00:00:00.000 Started by upstream project "autotest-per-patch" build number 132356
00:00:00.000 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.154 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.155 The recommended git tool is: git
00:00:00.155 using credential 00000000-0000-0000-0000-000000000002
00:00:00.161 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.193 Fetching changes from the remote Git repository
00:00:00.196 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.224 Using shallow fetch with depth 1
00:00:00.224 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.224 > git --version # timeout=10
00:00:00.249 > git --version # 'git version 2.39.2'
00:00:00.249 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.270 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.270 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.777 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.791 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.802 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.802 > git config core.sparsecheckout # timeout=10
00:00:07.816 > git read-tree -mu HEAD # timeout=10
00:00:07.835 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.862 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.862 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.948 [Pipeline] Start of Pipeline
00:00:07.964 [Pipeline] library
00:00:07.966 Loading library shm_lib@master
00:00:07.966 Library shm_lib@master is cached. Copying from home.
00:00:07.985 [Pipeline] node
00:00:07.994 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:07.996 [Pipeline] {
00:00:08.008 [Pipeline] catchError
00:00:08.010 [Pipeline] {
00:00:08.024 [Pipeline] wrap
00:00:08.032 [Pipeline] {
00:00:08.039 [Pipeline] stage
00:00:08.041 [Pipeline] { (Prologue)
00:00:08.258 [Pipeline] sh
00:00:08.541 + logger -p user.info -t JENKINS-CI
00:00:08.559 [Pipeline] echo
00:00:08.561 Node: WFP8
00:00:08.569 [Pipeline] sh
00:00:08.867 [Pipeline] setCustomBuildProperty
00:00:08.878 [Pipeline] echo
00:00:08.880 Cleanup processes
00:00:08.886 [Pipeline] sh
00:00:09.169 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.169 2060284 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.183 [Pipeline] sh
00:00:09.468 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:09.468 ++ grep -v 'sudo pgrep'
00:00:09.468 ++ awk '{print $1}'
00:00:09.468 + sudo kill -9
00:00:09.468 + true
00:00:09.481 [Pipeline] cleanWs
00:00:09.490 [WS-CLEANUP] Deleting project workspace...
00:00:09.490 [WS-CLEANUP] Deferred wipeout is used...
00:00:09.496 [WS-CLEANUP] done
00:00:09.500 [Pipeline] setCustomBuildProperty
00:00:09.511 [Pipeline] sh
00:00:09.789 + sudo git config --global --replace-all safe.directory '*'
00:00:09.875 [Pipeline] httpRequest
00:00:10.255 [Pipeline] echo
00:00:10.257 Sorcerer 10.211.164.20 is alive
00:00:10.267 [Pipeline] retry
00:00:10.269 [Pipeline] {
00:00:10.283 [Pipeline] httpRequest
00:00:10.287 HttpMethod: GET
00:00:10.287 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.288 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:10.304 Response Code: HTTP/1.1 200 OK
00:00:10.305 Success: Status code 200 is in the accepted range: 200,404
00:00:10.305 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:12.994 [Pipeline] }
00:00:13.013 [Pipeline] // retry
00:00:13.021 [Pipeline] sh
00:00:13.307 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.333 [Pipeline] httpRequest
00:00:13.807 [Pipeline] echo
00:00:13.809 Sorcerer 10.211.164.20 is alive
00:00:13.819 [Pipeline] retry
00:00:13.821 [Pipeline] {
00:00:13.837 [Pipeline] httpRequest
00:00:13.841 HttpMethod: GET
00:00:13.841 URL: http://10.211.164.20/packages/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:13.842 Sending request to url: http://10.211.164.20/packages/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:13.859 Response Code: HTTP/1.1 200 OK
00:00:13.859 Success: Status code 200 is in the accepted range: 200,404
00:00:13.859 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:48.078 [Pipeline] }
00:00:48.096 [Pipeline] // retry
00:00:48.104 [Pipeline] sh
00:00:48.389 + tar --no-same-owner -xf spdk_1c7c7c64f9c1fec12ac3e18fc8e22066034ced21.tar.gz
00:00:50.935 [Pipeline] sh
00:00:51.220 + git -C spdk log --oneline -n5
00:00:51.220 1c7c7c64f test/iscsi_tgt: Remove support for the namespace arg
00:00:51.220 4c583db59 test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP
00:00:51.220 c788bae60 test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy
00:00:51.220 e4689ab38 test/nvmf: Remove all transport conditions from the test suites
00:00:51.220 097b7c969 test/nvmf: Drop $RDMA_IP_LIST
00:00:51.230 [Pipeline] }
00:00:51.244 [Pipeline] // stage
00:00:51.252 [Pipeline] stage
00:00:51.254 [Pipeline] { (Prepare)
00:00:51.271 [Pipeline] writeFile
00:00:51.288 [Pipeline] sh
00:00:51.573 + logger -p user.info -t JENKINS-CI
00:00:51.584 [Pipeline] sh
00:00:51.869 + logger -p user.info -t JENKINS-CI
00:00:51.882 [Pipeline] sh
00:00:52.166 + cat autorun-spdk.conf
00:00:52.166 SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.166 SPDK_TEST_NVMF=1
00:00:52.166 SPDK_TEST_NVME_CLI=1
00:00:52.166 SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.166 SPDK_TEST_NVMF_NICS=e810
00:00:52.166 SPDK_TEST_VFIOUSER=1
00:00:52.166 SPDK_RUN_UBSAN=1
00:00:52.166 NET_TYPE=phy
00:00:52.174 RUN_NIGHTLY=0
00:00:52.179 [Pipeline] readFile
00:00:52.205 [Pipeline] withEnv
00:00:52.207 [Pipeline] {
00:00:52.219 [Pipeline] sh
00:00:52.505 + set -ex
00:00:52.505 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:00:52.505 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:52.505 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:52.505 ++ SPDK_TEST_NVMF=1
00:00:52.505 ++ SPDK_TEST_NVME_CLI=1
00:00:52.505 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:52.505 ++ SPDK_TEST_NVMF_NICS=e810
00:00:52.505 ++ SPDK_TEST_VFIOUSER=1
00:00:52.505 ++ SPDK_RUN_UBSAN=1
00:00:52.505 ++ NET_TYPE=phy
00:00:52.505 ++ RUN_NIGHTLY=0
00:00:52.505 + case $SPDK_TEST_NVMF_NICS in
00:00:52.505 + DRIVERS=ice
00:00:52.505 + [[ tcp == \r\d\m\a ]]
00:00:52.505 + [[ -n ice ]]
00:00:52.505 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:00:52.505 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:00:52.505 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:00:52.505 rmmod: ERROR: Module irdma is not currently loaded
00:00:52.505 rmmod: ERROR: Module i40iw is not currently loaded
00:00:52.505 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:00:52.505 + true
00:00:52.505 + for D in $DRIVERS
00:00:52.505 + sudo modprobe ice
00:00:52.505 + exit 0
00:00:52.514 [Pipeline] }
00:00:52.529 [Pipeline] // withEnv
00:00:52.535 [Pipeline] }
00:00:52.549 [Pipeline] // stage
00:00:52.559 [Pipeline] catchError
00:00:52.561 [Pipeline] {
00:00:52.577 [Pipeline] timeout
00:00:52.577 Timeout set to expire in 1 hr 0 min
00:00:52.579 [Pipeline] {
00:00:52.593 [Pipeline] stage
00:00:52.595 [Pipeline] { (Tests)
00:00:52.611 [Pipeline] sh
00:00:52.896 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.896 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.896 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.896 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:00:52.896 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:52.896 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:52.896 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:00:52.896 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:52.896 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:00:52.896 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:00:52.896 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:00:52.896 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:52.896 + source /etc/os-release
00:00:52.896 ++ NAME='Fedora Linux'
00:00:52.896 ++ VERSION='39 (Cloud Edition)'
00:00:52.896 ++ ID=fedora
00:00:52.896 ++ VERSION_ID=39
00:00:52.896 ++ VERSION_CODENAME=
00:00:52.896 ++ PLATFORM_ID=platform:f39
00:00:52.896 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:00:52.896 ++ ANSI_COLOR='0;38;2;60;110;180'
00:00:52.896 ++ LOGO=fedora-logo-icon
00:00:52.896 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:00:52.896 ++ HOME_URL=https://fedoraproject.org/
00:00:52.896 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:00:52.896 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:00:52.896 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:00:52.896 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:00:52.896 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:00:52.896 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:00:52.896 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:00:52.896 ++ SUPPORT_END=2024-11-12
00:00:52.896 ++ VARIANT='Cloud Edition'
00:00:52.896 ++ VARIANT_ID=cloud
00:00:52.896 + uname -a
00:00:52.896 Linux spdk-wfp-08 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:00:52.896 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:00:55.446 Hugepages
00:00:55.446 node hugesize free / total
00:00:55.446 node0 1048576kB 0 / 0
00:00:55.446 node0 2048kB 0 / 0
00:00:55.446 node1 1048576kB 0 / 0
00:00:55.446 node1 2048kB 0 / 0
00:00:55.446
00:00:55.446 Type BDF Vendor Device NUMA Driver Device Block devices
00:00:55.446 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - -
00:00:55.446 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - -
00:00:55.446 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1
00:00:55.446 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - -
00:00:55.446 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - -
00:00:55.446 + rm -f /tmp/spdk-ld-path
00:00:55.446 + source autorun-spdk.conf
00:00:55.446 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.446 ++ SPDK_TEST_NVMF=1
00:00:55.446 ++ SPDK_TEST_NVME_CLI=1
00:00:55.446 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.446 ++ SPDK_TEST_NVMF_NICS=e810
00:00:55.446 ++ SPDK_TEST_VFIOUSER=1
00:00:55.446 ++ SPDK_RUN_UBSAN=1
00:00:55.446 ++ NET_TYPE=phy
00:00:55.446 ++ RUN_NIGHTLY=0
00:00:55.446 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:00:55.446 + [[ -n '' ]]
00:00:55.446 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:55.446 + for M in /var/spdk/build-*-manifest.txt
00:00:55.446 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:00:55.446 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:55.446 + for M in /var/spdk/build-*-manifest.txt
00:00:55.446 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:00:55.446 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:55.446 + for M in /var/spdk/build-*-manifest.txt
00:00:55.446 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:00:55.446 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:00:55.446 ++ uname
00:00:55.446 + [[ Linux == \L\i\n\u\x ]]
00:00:55.446 + sudo dmesg -T
00:00:55.705 + sudo dmesg --clear
00:00:55.705 + dmesg_pid=2061722
00:00:55.705 + [[ Fedora Linux == FreeBSD ]]
00:00:55.705 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:55.705 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:00:55.705 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:00:55.705 + [[ -x /usr/src/fio-static/fio ]]
00:00:55.705 + export FIO_BIN=/usr/src/fio-static/fio
00:00:55.705 + FIO_BIN=/usr/src/fio-static/fio
00:00:55.705 + sudo dmesg -Tw
00:00:55.705 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:00:55.705 + [[ ! -v VFIO_QEMU_BIN ]]
00:00:55.705 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:00:55.705 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:55.705 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:00:55.705 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:00:55.705 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:55.705 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:00:55.705 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:55.705 08:45:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:55.705 08:45:11 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy
00:00:55.705 08:45:11 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0
00:00:55.705 08:45:11 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:00:55.705 08:45:11 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:00:55.705 08:45:11 -- common/autotest_common.sh@1692 -- $ [[ n == y ]]
00:00:55.705 08:45:11 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:00:55.705 08:45:11 -- scripts/common.sh@15 -- $ shopt -s extglob
00:00:55.705 08:45:11 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:00:55.705 08:45:11 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:00:55.705 08:45:11 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:00:55.705 08:45:11 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.705 08:45:11 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.705 08:45:11 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.705 08:45:11 -- paths/export.sh@5 -- $ export PATH
00:00:55.705 08:45:11 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:00:55.705 08:45:11 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:00:55.705 08:45:11 -- common/autobuild_common.sh@493 -- $ date +%s
00:00:55.705 08:45:11 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732088711.XXXXXX
00:00:55.705 08:45:11 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732088711.DgUB1X
00:00:55.705 08:45:11 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:00:55.705 08:45:11 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:00:55.705 08:45:11 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:00:55.706 08:45:11 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:00:55.706 08:45:11 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:00:55.706 08:45:11 -- common/autobuild_common.sh@509 -- $ get_config_params
00:00:55.706 08:45:11 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:00:55.706 08:45:11 -- common/autotest_common.sh@10 -- $ set +x
00:00:55.706 08:45:11 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:00:55.706 08:45:11 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:00:55.706 08:45:11 -- pm/common@17 -- $ local monitor
00:00:55.706 08:45:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:55.706 08:45:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:55.706 08:45:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:55.706 08:45:11 -- pm/common@21 -- $ date +%s
00:00:55.706 08:45:11 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:00:55.706 08:45:11 -- pm/common@21 -- $ date +%s
00:00:55.706 08:45:11 -- pm/common@25 -- $ sleep 1
00:00:55.706 08:45:11 -- pm/common@21 -- $ date +%s
00:00:55.706 08:45:11 -- pm/common@21 -- $ date +%s
00:00:55.706 08:45:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088711
00:00:55.706 08:45:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088711
00:00:55.706 08:45:11 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088711
00:00:55.706 08:45:11 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732088711
00:00:55.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088711_collect-cpu-load.pm.log
00:00:55.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088711_collect-vmstat.pm.log
00:00:55.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088711_collect-cpu-temp.pm.log
00:00:55.964 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732088711_collect-bmc-pm.bmc.pm.log
00:00:56.902 08:45:12 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:00:56.902 08:45:12 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:00:56.902 08:45:12 -- spdk/autobuild.sh@12 -- $ umask 022
00:00:56.902 08:45:12 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:56.902 08:45:12 -- spdk/autobuild.sh@16 -- $ date -u
00:00:56.902 Wed Nov 20 07:45:12 AM UTC 2024
00:00:56.902 08:45:12 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:00:56.902 v25.01-pre-207-g1c7c7c64f
00:00:56.902 08:45:12 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:00:56.902 08:45:12 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:00:56.902 08:45:12 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:00:56.902 08:45:12 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:00:56.902 08:45:12 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:00:56.902 08:45:12 -- common/autotest_common.sh@10 -- $ set +x
00:00:56.902 ************************************
00:00:56.902 START TEST ubsan
00:00:56.902 ************************************
00:00:56.902 08:45:12 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' using ubsan
00:00:56.902
00:00:56.902 real 0m0.001s
00:00:56.902 user 0m0.000s
00:00:56.902 sys 0m0.000s
00:00:56.902 08:45:12 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:00:56.902 08:45:12 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:00:56.902 ************************************
00:00:56.902 END TEST ubsan
00:00:56.902 ************************************
00:00:56.902 08:45:12 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:00:56.902 08:45:12 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:00:56.902 08:45:12 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:00:56.902 08:45:12 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:00:57.161 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:00:57.161 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:00:57.420 Using 'verbs' RDMA provider
00:01:10.206 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:22.415 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:22.415 Creating mk/config.mk...done.
00:01:22.415 Creating mk/cc.flags.mk...done.
00:01:22.415 Type 'make' to build.
00:01:22.415 08:45:38 -- spdk/autobuild.sh@70 -- $ run_test make make -j96
00:01:22.415 08:45:38 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:22.415 08:45:38 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:22.415 08:45:38 -- common/autotest_common.sh@10 -- $ set +x
00:01:22.416 ************************************
00:01:22.416 START TEST make
00:01:22.416 ************************************
00:01:22.416 08:45:38 make -- common/autotest_common.sh@1129 -- $ make -j96
00:01:22.984 make[1]: Nothing to be done for 'all'.
00:01:24.374 The Meson build system
00:01:24.375 Version: 1.5.0
00:01:24.375 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:24.375 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:24.375 Build type: native build
00:01:24.375 Project name: libvfio-user
00:01:24.375 Project version: 0.0.1
00:01:24.375 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:24.375 C linker for the host machine: cc ld.bfd 2.40-14
00:01:24.375 Host machine cpu family: x86_64
00:01:24.375 Host machine cpu: x86_64
00:01:24.375 Run-time dependency threads found: YES
00:01:24.375 Library dl found: YES
00:01:24.375 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:24.375 Run-time dependency json-c found: YES 0.17
00:01:24.375 Run-time dependency cmocka found: YES 1.1.7
00:01:24.375 Program pytest-3 found: NO
00:01:24.375 Program flake8 found: NO
00:01:24.375 Program misspell-fixer found: NO
00:01:24.375 Program restructuredtext-lint found: NO
00:01:24.375 Program valgrind found: YES (/usr/bin/valgrind)
00:01:24.375 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:24.375 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:24.375 Compiler for C supports arguments -Wwrite-strings: YES
00:01:24.375 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.375 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:24.375 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:24.375 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:24.375 Build targets in project: 8
00:01:24.375 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:24.375 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:24.375
00:01:24.375 libvfio-user 0.0.1
00:01:24.375
00:01:24.375 User defined options
00:01:24.375 buildtype : debug
00:01:24.375 default_library: shared
00:01:24.375 libdir : /usr/local/lib
00:01:24.375
00:01:24.375 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:24.940 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:24.940 [1/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:24.940 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:24.940 [3/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:24.940 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:24.940 [5/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:24.940 [6/37] Compiling C object samples/null.p/null.c.o
00:01:24.940 [7/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:24.940 [8/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:24.940 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:24.940 [10/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:24.940 [11/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:24.940 [12/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:24.940 [13/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:24.940 [14/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:24.940 [15/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:24.940 [16/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:24.940 [17/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:24.940 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:24.940 [19/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:24.940 [20/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:24.940 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:24.940 [22/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:24.940 [23/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:24.940 [24/37] Compiling C object samples/server.p/server.c.o
00:01:24.940 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:24.940 [26/37] Compiling C object samples/client.p/client.c.o
00:01:24.940 [27/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:24.940 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:24.940 [29/37] Linking target samples/client
00:01:25.198 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:25.198 [31/37] Linking target test/unit_tests
00:01:25.198 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:25.198 [33/37] Linking target samples/server
00:01:25.198 [34/37] Linking target samples/null
00:01:25.198 [35/37] Linking target samples/gpio-pci-idio-16
00:01:25.198 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:25.198 [37/37] Linking target samples/lspci
00:01:25.198 INFO: autodetecting backend as ninja
00:01:25.198 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:25.456 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:25.714 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:25.714 ninja: no work to do.
00:01:30.990 The Meson build system
00:01:30.990 Version: 1.5.0
00:01:30.990 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:30.990 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:30.990 Build type: native build
00:01:30.990 Program cat found: YES (/usr/bin/cat)
00:01:30.990 Project name: DPDK
00:01:30.990 Project version: 24.03.0
00:01:30.990 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:01:30.990 C linker for the host machine: cc ld.bfd 2.40-14
00:01:30.990 Host machine cpu family: x86_64
00:01:30.990 Host machine cpu: x86_64
00:01:30.990 Message: ## Building in Developer Mode ##
00:01:30.990 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:30.990 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:30.990 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:30.990 Program python3 found: YES (/usr/bin/python3)
00:01:30.990 Program cat found: YES (/usr/bin/cat)
00:01:30.990 Compiler for C supports arguments -march=native: YES
00:01:30.990 Checking for size of "void *" : 8
00:01:30.990 Checking for size of "void *" : 8 (cached)
00:01:30.990 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:01:30.990 Library m found: YES
00:01:30.990 Library numa found: YES
00:01:30.990 Has header "numaif.h" : YES
00:01:30.990 Library fdt found: NO
00:01:30.990 Library execinfo found: NO
00:01:30.990 Has header "execinfo.h" : YES
00:01:30.990 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:01:30.990 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:30.990 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:30.990 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:30.990 Run-time dependency openssl found: YES 3.1.1
00:01:30.990 Run-time dependency libpcap found: YES 1.10.4
00:01:30.990 Has header "pcap.h" with dependency libpcap: YES
00:01:30.990 Compiler for C supports arguments -Wcast-qual: YES
00:01:30.990 Compiler for C supports arguments -Wdeprecated: YES
00:01:30.991 Compiler for C supports arguments -Wformat: YES
00:01:30.991 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:30.991 Compiler for C supports arguments -Wformat-security: NO
00:01:30.991 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:30.991 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:30.991 Compiler for C supports arguments -Wnested-externs: YES
00:01:30.991 Compiler for C supports arguments -Wold-style-definition: YES
00:01:30.991 Compiler for C supports arguments -Wpointer-arith: YES
00:01:30.991 Compiler for C supports arguments -Wsign-compare: YES
00:01:30.991 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:30.991 Compiler for C supports arguments -Wundef: YES
00:01:30.991 Compiler for C supports arguments -Wwrite-strings: YES
00:01:30.991 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:30.991 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:30.991 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:30.991 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:30.991 Program objdump found: YES (/usr/bin/objdump)
00:01:30.991 Compiler for C supports arguments -mavx512f: YES
00:01:30.991 Checking if "AVX512 checking" compiles: YES
00:01:30.991 Fetching value of define "__SSE4_2__" : 1
00:01:30.991 Fetching value of define "__AES__" : 1
00:01:30.991 Fetching value of define "__AVX__" : 1
00:01:30.991 Fetching value of define "__AVX2__" : 1
00:01:30.991 Fetching value of define "__AVX512BW__" : 1
00:01:30.991 Fetching value of define "__AVX512CD__" : 1
00:01:30.991 Fetching value of define "__AVX512DQ__" : 1
00:01:30.991 Fetching value of define "__AVX512F__" : 1
00:01:30.991 Fetching value of define "__AVX512VL__" : 1 00:01:30.991 Fetching value of define "__PCLMUL__" : 1 00:01:30.991 Fetching value of define "__RDRND__" : 1 00:01:30.991 Fetching value of define "__RDSEED__" : 1 00:01:30.991 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:30.991 Fetching value of define "__znver1__" : (undefined) 00:01:30.991 Fetching value of define "__znver2__" : (undefined) 00:01:30.991 Fetching value of define "__znver3__" : (undefined) 00:01:30.991 Fetching value of define "__znver4__" : (undefined) 00:01:30.991 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:30.991 Message: lib/log: Defining dependency "log" 00:01:30.991 Message: lib/kvargs: Defining dependency "kvargs" 00:01:30.991 Message: lib/telemetry: Defining dependency "telemetry" 00:01:30.991 Checking for function "getentropy" : NO 00:01:30.991 Message: lib/eal: Defining dependency "eal" 00:01:30.991 Message: lib/ring: Defining dependency "ring" 00:01:30.991 Message: lib/rcu: Defining dependency "rcu" 00:01:30.991 Message: lib/mempool: Defining dependency "mempool" 00:01:30.991 Message: lib/mbuf: Defining dependency "mbuf" 00:01:30.991 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:30.991 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:30.991 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:30.991 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:30.991 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:30.991 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:30.991 Compiler for C supports arguments -mpclmul: YES 00:01:30.991 Compiler for C supports arguments -maes: YES 00:01:30.991 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:30.991 Compiler for C supports arguments -mavx512bw: YES 00:01:30.991 Compiler for C supports arguments -mavx512dq: YES 00:01:30.991 Compiler for C supports arguments -mavx512vl: YES 00:01:30.991 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:01:30.991 Compiler for C supports arguments -mavx2: YES 00:01:30.991 Compiler for C supports arguments -mavx: YES 00:01:30.991 Message: lib/net: Defining dependency "net" 00:01:30.991 Message: lib/meter: Defining dependency "meter" 00:01:30.991 Message: lib/ethdev: Defining dependency "ethdev" 00:01:30.991 Message: lib/pci: Defining dependency "pci" 00:01:30.991 Message: lib/cmdline: Defining dependency "cmdline" 00:01:30.991 Message: lib/hash: Defining dependency "hash" 00:01:30.991 Message: lib/timer: Defining dependency "timer" 00:01:30.991 Message: lib/compressdev: Defining dependency "compressdev" 00:01:30.991 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:30.991 Message: lib/dmadev: Defining dependency "dmadev" 00:01:30.991 Compiler for C supports arguments -Wno-cast-qual: YES 00:01:30.991 Message: lib/power: Defining dependency "power" 00:01:30.991 Message: lib/reorder: Defining dependency "reorder" 00:01:30.991 Message: lib/security: Defining dependency "security" 00:01:30.991 Has header "linux/userfaultfd.h" : YES 00:01:30.991 Has header "linux/vduse.h" : YES 00:01:30.991 Message: lib/vhost: Defining dependency "vhost" 00:01:30.991 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:30.991 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:30.991 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:30.991 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:30.991 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:30.991 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:30.991 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:30.991 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:30.991 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:30.991 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:01:30.991 Program doxygen found: YES (/usr/local/bin/doxygen)
00:01:30.991 Configuring doxy-api-html.conf using configuration
00:01:30.991 Configuring doxy-api-man.conf using configuration
00:01:30.991 Program mandb found: YES (/usr/bin/mandb)
00:01:30.991 Program sphinx-build found: NO
00:01:30.991 Configuring rte_build_config.h using configuration
00:01:30.991 Message:
00:01:30.991 =================
00:01:30.991 Applications Enabled
00:01:30.991 =================
00:01:30.991
00:01:30.991 apps:
00:01:30.991
00:01:30.991
00:01:30.991 Message:
00:01:30.991 =================
00:01:30.991 Libraries Enabled
00:01:30.991 =================
00:01:30.991
00:01:30.991 libs:
00:01:30.991 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:01:30.991 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:01:30.991 cryptodev, dmadev, power, reorder, security, vhost,
00:01:30.991
00:01:30.991 Message:
00:01:30.991 ===============
00:01:30.991 Drivers Enabled
00:01:30.991 ===============
00:01:30.991
00:01:30.991 common:
00:01:30.991
00:01:30.991 bus:
00:01:30.991 pci, vdev,
00:01:30.991 mempool:
00:01:30.991 ring,
00:01:30.991 dma:
00:01:30.991
00:01:30.991 net:
00:01:30.991
00:01:30.991 crypto:
00:01:30.991
00:01:30.991 compress:
00:01:30.991
00:01:30.991 vdpa:
00:01:30.991
00:01:30.991
00:01:30.991 Message:
00:01:30.991 =================
00:01:30.991 Content Skipped
00:01:30.991 =================
00:01:30.991
00:01:30.991 apps:
00:01:30.991 dumpcap: explicitly disabled via build config
00:01:30.991 graph: explicitly disabled via build config
00:01:30.991 pdump: explicitly disabled via build config
00:01:30.991 proc-info: explicitly disabled via build config
00:01:30.991 test-acl: explicitly disabled via build config
00:01:30.991 test-bbdev: explicitly disabled via build config
00:01:30.991 test-cmdline: explicitly disabled via build config
00:01:30.991 test-compress-perf: explicitly disabled via build config
00:01:30.991 test-crypto-perf: explicitly disabled
via build config 00:01:30.991 test-dma-perf: explicitly disabled via build config 00:01:30.991 test-eventdev: explicitly disabled via build config 00:01:30.991 test-fib: explicitly disabled via build config 00:01:30.991 test-flow-perf: explicitly disabled via build config 00:01:30.991 test-gpudev: explicitly disabled via build config 00:01:30.991 test-mldev: explicitly disabled via build config 00:01:30.991 test-pipeline: explicitly disabled via build config 00:01:30.991 test-pmd: explicitly disabled via build config 00:01:30.991 test-regex: explicitly disabled via build config 00:01:30.991 test-sad: explicitly disabled via build config 00:01:30.991 test-security-perf: explicitly disabled via build config 00:01:30.991 00:01:30.991 libs: 00:01:30.991 argparse: explicitly disabled via build config 00:01:30.991 metrics: explicitly disabled via build config 00:01:30.991 acl: explicitly disabled via build config 00:01:30.991 bbdev: explicitly disabled via build config 00:01:30.991 bitratestats: explicitly disabled via build config 00:01:30.991 bpf: explicitly disabled via build config 00:01:30.991 cfgfile: explicitly disabled via build config 00:01:30.991 distributor: explicitly disabled via build config 00:01:30.991 efd: explicitly disabled via build config 00:01:30.991 eventdev: explicitly disabled via build config 00:01:30.991 dispatcher: explicitly disabled via build config 00:01:30.991 gpudev: explicitly disabled via build config 00:01:30.991 gro: explicitly disabled via build config 00:01:30.991 gso: explicitly disabled via build config 00:01:30.991 ip_frag: explicitly disabled via build config 00:01:30.991 jobstats: explicitly disabled via build config 00:01:30.991 latencystats: explicitly disabled via build config 00:01:30.991 lpm: explicitly disabled via build config 00:01:30.991 member: explicitly disabled via build config 00:01:30.991 pcapng: explicitly disabled via build config 00:01:30.991 rawdev: explicitly disabled via build config 00:01:30.991 regexdev: 
explicitly disabled via build config 00:01:30.991 mldev: explicitly disabled via build config 00:01:30.991 rib: explicitly disabled via build config 00:01:30.991 sched: explicitly disabled via build config 00:01:30.991 stack: explicitly disabled via build config 00:01:30.991 ipsec: explicitly disabled via build config 00:01:30.991 pdcp: explicitly disabled via build config 00:01:30.991 fib: explicitly disabled via build config 00:01:30.991 port: explicitly disabled via build config 00:01:30.991 pdump: explicitly disabled via build config 00:01:30.991 table: explicitly disabled via build config 00:01:30.991 pipeline: explicitly disabled via build config 00:01:30.991 graph: explicitly disabled via build config 00:01:30.991 node: explicitly disabled via build config 00:01:30.991 00:01:30.991 drivers: 00:01:30.991 common/cpt: not in enabled drivers build config 00:01:30.992 common/dpaax: not in enabled drivers build config 00:01:30.992 common/iavf: not in enabled drivers build config 00:01:30.992 common/idpf: not in enabled drivers build config 00:01:30.992 common/ionic: not in enabled drivers build config 00:01:30.992 common/mvep: not in enabled drivers build config 00:01:30.992 common/octeontx: not in enabled drivers build config 00:01:30.992 bus/auxiliary: not in enabled drivers build config 00:01:30.992 bus/cdx: not in enabled drivers build config 00:01:30.992 bus/dpaa: not in enabled drivers build config 00:01:30.992 bus/fslmc: not in enabled drivers build config 00:01:30.992 bus/ifpga: not in enabled drivers build config 00:01:30.992 bus/platform: not in enabled drivers build config 00:01:30.992 bus/uacce: not in enabled drivers build config 00:01:30.992 bus/vmbus: not in enabled drivers build config 00:01:30.992 common/cnxk: not in enabled drivers build config 00:01:30.992 common/mlx5: not in enabled drivers build config 00:01:30.992 common/nfp: not in enabled drivers build config 00:01:30.992 common/nitrox: not in enabled drivers build config 00:01:30.992 
common/qat: not in enabled drivers build config 00:01:30.992 common/sfc_efx: not in enabled drivers build config 00:01:30.992 mempool/bucket: not in enabled drivers build config 00:01:30.992 mempool/cnxk: not in enabled drivers build config 00:01:30.992 mempool/dpaa: not in enabled drivers build config 00:01:30.992 mempool/dpaa2: not in enabled drivers build config 00:01:30.992 mempool/octeontx: not in enabled drivers build config 00:01:30.992 mempool/stack: not in enabled drivers build config 00:01:30.992 dma/cnxk: not in enabled drivers build config 00:01:30.992 dma/dpaa: not in enabled drivers build config 00:01:30.992 dma/dpaa2: not in enabled drivers build config 00:01:30.992 dma/hisilicon: not in enabled drivers build config 00:01:30.992 dma/idxd: not in enabled drivers build config 00:01:30.992 dma/ioat: not in enabled drivers build config 00:01:30.992 dma/skeleton: not in enabled drivers build config 00:01:30.992 net/af_packet: not in enabled drivers build config 00:01:30.992 net/af_xdp: not in enabled drivers build config 00:01:30.992 net/ark: not in enabled drivers build config 00:01:30.992 net/atlantic: not in enabled drivers build config 00:01:30.992 net/avp: not in enabled drivers build config 00:01:30.992 net/axgbe: not in enabled drivers build config 00:01:30.992 net/bnx2x: not in enabled drivers build config 00:01:30.992 net/bnxt: not in enabled drivers build config 00:01:30.992 net/bonding: not in enabled drivers build config 00:01:30.992 net/cnxk: not in enabled drivers build config 00:01:30.992 net/cpfl: not in enabled drivers build config 00:01:30.992 net/cxgbe: not in enabled drivers build config 00:01:30.992 net/dpaa: not in enabled drivers build config 00:01:30.992 net/dpaa2: not in enabled drivers build config 00:01:30.992 net/e1000: not in enabled drivers build config 00:01:30.992 net/ena: not in enabled drivers build config 00:01:30.992 net/enetc: not in enabled drivers build config 00:01:30.992 net/enetfec: not in enabled drivers build 
config 00:01:30.992 net/enic: not in enabled drivers build config 00:01:30.992 net/failsafe: not in enabled drivers build config 00:01:30.992 net/fm10k: not in enabled drivers build config 00:01:30.992 net/gve: not in enabled drivers build config 00:01:30.992 net/hinic: not in enabled drivers build config 00:01:30.992 net/hns3: not in enabled drivers build config 00:01:30.992 net/i40e: not in enabled drivers build config 00:01:30.992 net/iavf: not in enabled drivers build config 00:01:30.992 net/ice: not in enabled drivers build config 00:01:30.992 net/idpf: not in enabled drivers build config 00:01:30.992 net/igc: not in enabled drivers build config 00:01:30.992 net/ionic: not in enabled drivers build config 00:01:30.992 net/ipn3ke: not in enabled drivers build config 00:01:30.992 net/ixgbe: not in enabled drivers build config 00:01:30.992 net/mana: not in enabled drivers build config 00:01:30.992 net/memif: not in enabled drivers build config 00:01:30.992 net/mlx4: not in enabled drivers build config 00:01:30.992 net/mlx5: not in enabled drivers build config 00:01:30.992 net/mvneta: not in enabled drivers build config 00:01:30.992 net/mvpp2: not in enabled drivers build config 00:01:30.992 net/netvsc: not in enabled drivers build config 00:01:30.992 net/nfb: not in enabled drivers build config 00:01:30.992 net/nfp: not in enabled drivers build config 00:01:30.992 net/ngbe: not in enabled drivers build config 00:01:30.992 net/null: not in enabled drivers build config 00:01:30.992 net/octeontx: not in enabled drivers build config 00:01:30.992 net/octeon_ep: not in enabled drivers build config 00:01:30.992 net/pcap: not in enabled drivers build config 00:01:30.992 net/pfe: not in enabled drivers build config 00:01:30.992 net/qede: not in enabled drivers build config 00:01:30.992 net/ring: not in enabled drivers build config 00:01:30.992 net/sfc: not in enabled drivers build config 00:01:30.992 net/softnic: not in enabled drivers build config 00:01:30.992 net/tap: 
not in enabled drivers build config 00:01:30.992 net/thunderx: not in enabled drivers build config 00:01:30.992 net/txgbe: not in enabled drivers build config 00:01:30.992 net/vdev_netvsc: not in enabled drivers build config 00:01:30.992 net/vhost: not in enabled drivers build config 00:01:30.992 net/virtio: not in enabled drivers build config 00:01:30.992 net/vmxnet3: not in enabled drivers build config 00:01:30.992 raw/*: missing internal dependency, "rawdev" 00:01:30.992 crypto/armv8: not in enabled drivers build config 00:01:30.992 crypto/bcmfs: not in enabled drivers build config 00:01:30.992 crypto/caam_jr: not in enabled drivers build config 00:01:30.992 crypto/ccp: not in enabled drivers build config 00:01:30.992 crypto/cnxk: not in enabled drivers build config 00:01:30.992 crypto/dpaa_sec: not in enabled drivers build config 00:01:30.992 crypto/dpaa2_sec: not in enabled drivers build config 00:01:30.992 crypto/ipsec_mb: not in enabled drivers build config 00:01:30.992 crypto/mlx5: not in enabled drivers build config 00:01:30.992 crypto/mvsam: not in enabled drivers build config 00:01:30.992 crypto/nitrox: not in enabled drivers build config 00:01:30.992 crypto/null: not in enabled drivers build config 00:01:30.992 crypto/octeontx: not in enabled drivers build config 00:01:30.992 crypto/openssl: not in enabled drivers build config 00:01:30.992 crypto/scheduler: not in enabled drivers build config 00:01:30.992 crypto/uadk: not in enabled drivers build config 00:01:30.992 crypto/virtio: not in enabled drivers build config 00:01:30.992 compress/isal: not in enabled drivers build config 00:01:30.992 compress/mlx5: not in enabled drivers build config 00:01:30.992 compress/nitrox: not in enabled drivers build config 00:01:30.992 compress/octeontx: not in enabled drivers build config 00:01:30.992 compress/zlib: not in enabled drivers build config 00:01:30.992 regex/*: missing internal dependency, "regexdev" 00:01:30.992 ml/*: missing internal dependency, "mldev" 
00:01:30.992 vdpa/ifc: not in enabled drivers build config
00:01:30.992 vdpa/mlx5: not in enabled drivers build config
00:01:30.992 vdpa/nfp: not in enabled drivers build config
00:01:30.992 vdpa/sfc: not in enabled drivers build config
00:01:30.992 event/*: missing internal dependency, "eventdev"
00:01:30.992 baseband/*: missing internal dependency, "bbdev"
00:01:30.992 gpu/*: missing internal dependency, "gpudev"
00:01:30.992
00:01:30.992
00:01:30.992 Build targets in project: 85
00:01:30.992
00:01:30.992 DPDK 24.03.0
00:01:30.992
00:01:30.992 User defined options
00:01:30.992 buildtype : debug
00:01:30.992 default_library : shared
00:01:30.992 libdir : lib
00:01:30.992 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:30.992 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:01:30.992 c_link_args :
00:01:30.992 cpu_instruction_set: native
00:01:30.992 disable_apps : test-dma-perf,test,test-sad,test-acl,test-pmd,test-mldev,test-compress-perf,test-cmdline,test-regex,test-fib,graph,test-bbdev,dumpcap,test-gpudev,proc-info,test-pipeline,test-flow-perf,test-crypto-perf,pdump,test-eventdev,test-security-perf
00:01:30.992 disable_libs : port,lpm,ipsec,regexdev,dispatcher,argparse,bitratestats,rawdev,stack,graph,acl,bbdev,pipeline,member,sched,pcapng,mldev,eventdev,efd,metrics,latencystats,cfgfile,ip_frag,jobstats,pdump,pdcp,rib,node,fib,distributor,gso,table,bpf,gpudev,gro
00:01:30.992 enable_docs : false
00:01:30.992 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:01:30.992 enable_kmods : false
00:01:30.992 max_lcores : 128
00:01:30.992 tests : false
00:01:30.992
00:01:30.992 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:31.320 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp'
00:01:31.629 [1/268] Compiling C object
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:31.629 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:31.629 [3/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:31.629 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:31.629 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:31.629 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:31.629 [7/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:31.629 [8/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:31.629 [9/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:31.629 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:31.629 [11/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:31.629 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:31.629 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:31.629 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:31.629 [15/268] Linking static target lib/librte_kvargs.a 00:01:31.629 [16/268] Linking static target lib/librte_log.a 00:01:31.629 [17/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:31.629 [18/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:31.629 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:31.891 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:31.891 [21/268] Linking static target lib/librte_pci.a 00:01:31.891 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:31.891 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:31.891 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:31.891 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:31.891 [26/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:32.150 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:32.150 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:32.150 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:32.150 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:32.150 [31/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:32.150 [32/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:32.150 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:32.150 [34/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:32.150 [35/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:32.150 [36/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:32.150 [37/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:32.150 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:32.150 [39/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:32.150 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:32.150 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:32.150 [42/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:32.150 [43/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:32.150 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:32.150 [45/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:32.150 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:32.150 [47/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:32.150 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:32.150 [49/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:32.150 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:32.150 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:32.150 [52/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:32.150 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:32.150 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:32.150 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:32.150 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:32.150 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:32.150 [58/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:32.150 [59/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:32.150 [60/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:32.150 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:32.150 [62/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:32.150 [63/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:32.150 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:32.150 [65/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:32.150 [66/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:32.150 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:32.150 [68/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:32.150 [69/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:32.150 [70/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:32.150 [71/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:32.150 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:32.150 [73/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:32.150 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:32.150 [75/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:32.150 [76/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:32.151 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:32.151 [78/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:32.151 [79/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:32.151 [80/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:32.151 [81/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:32.151 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:32.151 [83/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:32.151 [84/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:32.151 [85/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:32.151 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:32.151 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:32.151 [88/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:32.151 [89/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:32.151 [90/268] Linking static target lib/librte_meter.a 00:01:32.151 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:32.151 [92/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:32.151 [93/268] Linking static target lib/librte_ring.a 00:01:32.151 [94/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:32.151 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:32.151 [96/268] Linking static target lib/librte_telemetry.a 00:01:32.151 [97/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:32.151 [98/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:32.151 [99/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:32.151 [100/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:32.151 [101/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:32.151 [102/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:32.151 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:32.151 [104/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.151 [105/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:32.151 [106/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:32.151 [107/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:32.151 [108/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.151 [109/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:32.151 [110/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:32.151 [111/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:32.151 [112/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:32.151 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:32.151 [114/268] Linking static target lib/librte_rcu.a 00:01:32.151 [115/268] 
Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:32.151 [116/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:32.151 [117/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:32.151 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:32.408 [119/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:32.408 [120/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:32.408 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:32.408 [122/268] Linking static target lib/librte_mempool.a 00:01:32.408 [123/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:32.408 [124/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:32.408 [125/268] Linking static target lib/librte_net.a 00:01:32.408 [126/268] Linking static target lib/librte_eal.a 00:01:32.408 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:32.408 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:32.408 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:32.408 [130/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:32.408 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:32.408 [132/268] Linking static target lib/librte_cmdline.a 00:01:32.408 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:32.408 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:32.408 [135/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:32.408 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.408 [137/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:32.408 [138/268] Generating lib/meter.sym_chk with a custom command 
(wrapped by meson to capture output) 00:01:32.408 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:32.408 [140/268] Linking target lib/librte_log.so.24.1 00:01:32.408 [141/268] Linking static target lib/librte_mbuf.a 00:01:32.408 [142/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:32.408 [143/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:32.408 [144/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.408 [145/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.408 [146/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:32.666 [147/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:32.666 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:32.666 [149/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:32.666 [150/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.666 [151/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:32.666 [152/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:32.666 [153/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:32.667 [154/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:32.667 [155/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:32.667 [156/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:32.667 [157/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:32.667 [158/268] Linking static target lib/librte_dmadev.a 00:01:32.667 [159/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:32.667 [160/268] Compiling C object 
lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:32.667 [161/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:32.667 [162/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:32.667 [163/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.667 [164/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:32.667 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:32.667 [166/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:32.667 [167/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:32.667 [168/268] Linking static target lib/librte_reorder.a 00:01:32.667 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:32.667 [170/268] Linking static target lib/librte_timer.a 00:01:32.667 [171/268] Linking target lib/librte_kvargs.so.24.1 00:01:32.667 [172/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:32.667 [173/268] Linking target lib/librte_telemetry.so.24.1 00:01:32.667 [174/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:32.667 [175/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:32.667 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:32.667 [177/268] Linking static target lib/librte_security.a 00:01:32.667 [178/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:32.667 [179/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:32.667 [180/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:32.667 [181/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:32.667 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:32.667 [183/268] Compiling C object 
drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:32.667 [184/268] Linking static target lib/librte_compressdev.a 00:01:32.667 [185/268] Linking static target lib/librte_power.a 00:01:32.667 [186/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:32.667 [187/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:32.667 [188/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:32.667 [189/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:32.667 [190/268] Linking static target lib/librte_hash.a 00:01:32.667 [191/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:32.925 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:32.925 [193/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:32.925 [194/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:32.925 [195/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:32.925 [196/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:32.925 [197/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:32.925 [198/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:32.925 [199/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:32.925 [200/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.925 [201/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:32.925 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:32.925 [203/268] Linking static target drivers/librte_bus_vdev.a 00:01:32.925 [204/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:32.925 [205/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.925 [206/268] 
Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:32.925 [207/268] Linking static target lib/librte_cryptodev.a 00:01:32.925 [208/268] Linking static target drivers/librte_bus_pci.a 00:01:32.925 [209/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:32.926 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:32.926 [211/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:32.926 [212/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:33.185 [213/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.185 [214/268] Linking static target drivers/librte_mempool_ring.a 00:01:33.185 [215/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.185 [216/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.185 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.185 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.185 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:33.443 [220/268] Linking static target lib/librte_ethdev.a 00:01:33.443 [221/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.443 [222/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.443 [223/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:33.443 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.701 [225/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture 
output) 00:01:33.701 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:33.701 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:34.638 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:34.638 [229/268] Linking static target lib/librte_vhost.a 00:01:34.896 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:36.800 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.069 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.327 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:42.327 [234/268] Linking target lib/librte_eal.so.24.1 00:01:42.327 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:42.586 [236/268] Linking target lib/librte_pci.so.24.1 00:01:42.586 [237/268] Linking target lib/librte_ring.so.24.1 00:01:42.586 [238/268] Linking target lib/librte_meter.so.24.1 00:01:42.586 [239/268] Linking target lib/librte_timer.so.24.1 00:01:42.586 [240/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:42.586 [241/268] Linking target lib/librte_dmadev.so.24.1 00:01:42.586 [242/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:42.586 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:42.586 [244/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:42.586 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:42.586 [246/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:42.586 [247/268] Linking target lib/librte_rcu.so.24.1 00:01:42.586 [248/268] Linking target 
lib/librte_mempool.so.24.1 00:01:42.586 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:42.844 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:42.844 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:42.844 [252/268] Linking target lib/librte_mbuf.so.24.1 00:01:42.844 [253/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:42.844 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:43.102 [255/268] Linking target lib/librte_compressdev.so.24.1 00:01:43.102 [256/268] Linking target lib/librte_reorder.so.24.1 00:01:43.102 [257/268] Linking target lib/librte_net.so.24.1 00:01:43.102 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:01:43.102 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:43.102 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:43.102 [261/268] Linking target lib/librte_security.so.24.1 00:01:43.102 [262/268] Linking target lib/librte_cmdline.so.24.1 00:01:43.102 [263/268] Linking target lib/librte_hash.so.24.1 00:01:43.102 [264/268] Linking target lib/librte_ethdev.so.24.1 00:01:43.360 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:43.360 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:43.360 [267/268] Linking target lib/librte_power.so.24.1 00:01:43.360 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:43.360 INFO: autodetecting backend as ninja 00:01:43.360 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:01:55.569 CC lib/log/log.o 00:01:55.569 CC lib/log/log_flags.o 00:01:55.569 CC lib/log/log_deprecated.o 00:01:55.569 CC lib/ut_mock/mock.o 00:01:55.569 CC lib/ut/ut.o 00:01:55.569 LIB libspdk_log.a 
00:01:55.569 LIB libspdk_ut_mock.a 00:01:55.569 LIB libspdk_ut.a 00:01:55.569 SO libspdk_ut_mock.so.6.0 00:01:55.569 SO libspdk_log.so.7.1 00:01:55.569 SO libspdk_ut.so.2.0 00:01:55.569 SYMLINK libspdk_ut_mock.so 00:01:55.569 SYMLINK libspdk_log.so 00:01:55.569 SYMLINK libspdk_ut.so 00:01:55.569 CC lib/ioat/ioat.o 00:01:55.569 CC lib/dma/dma.o 00:01:55.569 CXX lib/trace_parser/trace.o 00:01:55.569 CC lib/util/base64.o 00:01:55.569 CC lib/util/bit_array.o 00:01:55.569 CC lib/util/cpuset.o 00:01:55.569 CC lib/util/crc16.o 00:01:55.569 CC lib/util/crc32.o 00:01:55.569 CC lib/util/crc32c.o 00:01:55.569 CC lib/util/crc32_ieee.o 00:01:55.569 CC lib/util/crc64.o 00:01:55.569 CC lib/util/dif.o 00:01:55.569 CC lib/util/fd.o 00:01:55.569 CC lib/util/fd_group.o 00:01:55.569 CC lib/util/file.o 00:01:55.569 CC lib/util/hexlify.o 00:01:55.569 CC lib/util/iov.o 00:01:55.569 CC lib/util/math.o 00:01:55.569 CC lib/util/net.o 00:01:55.569 CC lib/util/pipe.o 00:01:55.569 CC lib/util/strerror_tls.o 00:01:55.569 CC lib/util/string.o 00:01:55.569 CC lib/util/uuid.o 00:01:55.569 CC lib/util/xor.o 00:01:55.569 CC lib/util/zipf.o 00:01:55.569 CC lib/util/md5.o 00:01:55.569 CC lib/vfio_user/host/vfio_user_pci.o 00:01:55.569 CC lib/vfio_user/host/vfio_user.o 00:01:55.569 LIB libspdk_dma.a 00:01:55.569 SO libspdk_dma.so.5.0 00:01:55.569 LIB libspdk_ioat.a 00:01:55.569 SO libspdk_ioat.so.7.0 00:01:55.569 SYMLINK libspdk_dma.so 00:01:55.569 SYMLINK libspdk_ioat.so 00:01:55.569 LIB libspdk_vfio_user.a 00:01:55.569 SO libspdk_vfio_user.so.5.0 00:01:55.569 LIB libspdk_util.a 00:01:55.569 SYMLINK libspdk_vfio_user.so 00:01:55.569 SO libspdk_util.so.10.1 00:01:55.569 SYMLINK libspdk_util.so 00:01:55.569 LIB libspdk_trace_parser.a 00:01:55.569 SO libspdk_trace_parser.so.6.0 00:01:55.569 SYMLINK libspdk_trace_parser.so 00:01:55.569 CC lib/json/json_parse.o 00:01:55.569 CC lib/json/json_util.o 00:01:55.569 CC lib/json/json_write.o 00:01:55.569 CC lib/idxd/idxd.o 00:01:55.569 CC lib/idxd/idxd_user.o 
00:01:55.569 CC lib/vmd/vmd.o 00:01:55.569 CC lib/conf/conf.o 00:01:55.569 CC lib/idxd/idxd_kernel.o 00:01:55.569 CC lib/vmd/led.o 00:01:55.569 CC lib/env_dpdk/env.o 00:01:55.569 CC lib/env_dpdk/memory.o 00:01:55.569 CC lib/env_dpdk/pci.o 00:01:55.569 CC lib/env_dpdk/init.o 00:01:55.569 CC lib/rdma_utils/rdma_utils.o 00:01:55.569 CC lib/env_dpdk/threads.o 00:01:55.569 CC lib/env_dpdk/pci_ioat.o 00:01:55.569 CC lib/env_dpdk/pci_virtio.o 00:01:55.569 CC lib/env_dpdk/pci_vmd.o 00:01:55.569 CC lib/env_dpdk/pci_idxd.o 00:01:55.569 CC lib/env_dpdk/pci_event.o 00:01:55.569 CC lib/env_dpdk/sigbus_handler.o 00:01:55.569 CC lib/env_dpdk/pci_dpdk.o 00:01:55.569 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:55.569 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:55.569 LIB libspdk_conf.a 00:01:55.569 LIB libspdk_json.a 00:01:55.569 SO libspdk_conf.so.6.0 00:01:55.569 LIB libspdk_rdma_utils.a 00:01:55.569 SO libspdk_json.so.6.0 00:01:55.569 SO libspdk_rdma_utils.so.1.0 00:01:55.569 SYMLINK libspdk_conf.so 00:01:55.828 SYMLINK libspdk_json.so 00:01:55.828 SYMLINK libspdk_rdma_utils.so 00:01:55.828 LIB libspdk_idxd.a 00:01:55.828 SO libspdk_idxd.so.12.1 00:01:55.828 LIB libspdk_vmd.a 00:01:55.828 SYMLINK libspdk_idxd.so 00:01:55.828 SO libspdk_vmd.so.6.0 00:01:56.086 SYMLINK libspdk_vmd.so 00:01:56.086 CC lib/jsonrpc/jsonrpc_server.o 00:01:56.087 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:01:56.087 CC lib/jsonrpc/jsonrpc_client.o 00:01:56.087 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:01:56.087 CC lib/rdma_provider/common.o 00:01:56.087 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:56.087 LIB libspdk_jsonrpc.a 00:01:56.346 LIB libspdk_rdma_provider.a 00:01:56.346 SO libspdk_rdma_provider.so.7.0 00:01:56.346 SO libspdk_jsonrpc.so.6.0 00:01:56.346 SYMLINK libspdk_rdma_provider.so 00:01:56.346 SYMLINK libspdk_jsonrpc.so 00:01:56.346 LIB libspdk_env_dpdk.a 00:01:56.605 SO libspdk_env_dpdk.so.15.1 00:01:56.605 SYMLINK libspdk_env_dpdk.so 00:01:56.605 CC lib/rpc/rpc.o 00:01:56.864 LIB libspdk_rpc.a 
00:01:56.864 SO libspdk_rpc.so.6.0 00:01:56.864 SYMLINK libspdk_rpc.so 00:01:57.123 CC lib/keyring/keyring.o 00:01:57.123 CC lib/trace/trace.o 00:01:57.123 CC lib/keyring/keyring_rpc.o 00:01:57.123 CC lib/notify/notify.o 00:01:57.123 CC lib/trace/trace_flags.o 00:01:57.123 CC lib/notify/notify_rpc.o 00:01:57.123 CC lib/trace/trace_rpc.o 00:01:57.383 LIB libspdk_notify.a 00:01:57.383 LIB libspdk_keyring.a 00:01:57.383 SO libspdk_notify.so.6.0 00:01:57.383 LIB libspdk_trace.a 00:01:57.383 SO libspdk_keyring.so.2.0 00:01:57.383 SYMLINK libspdk_notify.so 00:01:57.383 SO libspdk_trace.so.11.0 00:01:57.383 SYMLINK libspdk_keyring.so 00:01:57.643 SYMLINK libspdk_trace.so 00:01:57.903 CC lib/sock/sock.o 00:01:57.903 CC lib/sock/sock_rpc.o 00:01:57.903 CC lib/thread/thread.o 00:01:57.903 CC lib/thread/iobuf.o 00:01:58.162 LIB libspdk_sock.a 00:01:58.162 SO libspdk_sock.so.10.0 00:01:58.162 SYMLINK libspdk_sock.so 00:01:58.730 CC lib/nvme/nvme_ctrlr_cmd.o 00:01:58.730 CC lib/nvme/nvme_ctrlr.o 00:01:58.730 CC lib/nvme/nvme_fabric.o 00:01:58.730 CC lib/nvme/nvme_ns_cmd.o 00:01:58.730 CC lib/nvme/nvme_ns.o 00:01:58.730 CC lib/nvme/nvme_pcie_common.o 00:01:58.730 CC lib/nvme/nvme_pcie.o 00:01:58.730 CC lib/nvme/nvme_qpair.o 00:01:58.730 CC lib/nvme/nvme.o 00:01:58.730 CC lib/nvme/nvme_quirks.o 00:01:58.730 CC lib/nvme/nvme_transport.o 00:01:58.730 CC lib/nvme/nvme_discovery.o 00:01:58.730 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:01:58.730 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:01:58.730 CC lib/nvme/nvme_tcp.o 00:01:58.730 CC lib/nvme/nvme_opal.o 00:01:58.730 CC lib/nvme/nvme_io_msg.o 00:01:58.730 CC lib/nvme/nvme_poll_group.o 00:01:58.730 CC lib/nvme/nvme_zns.o 00:01:58.730 CC lib/nvme/nvme_stubs.o 00:01:58.730 CC lib/nvme/nvme_auth.o 00:01:58.730 CC lib/nvme/nvme_cuse.o 00:01:58.730 CC lib/nvme/nvme_vfio_user.o 00:01:58.730 CC lib/nvme/nvme_rdma.o 00:01:58.988 LIB libspdk_thread.a 00:01:58.988 SO libspdk_thread.so.11.0 00:01:58.988 SYMLINK libspdk_thread.so 00:01:59.248 CC 
lib/init/json_config.o 00:01:59.248 CC lib/init/subsystem.o 00:01:59.248 CC lib/init/subsystem_rpc.o 00:01:59.248 CC lib/init/rpc.o 00:01:59.248 CC lib/vfu_tgt/tgt_endpoint.o 00:01:59.248 CC lib/vfu_tgt/tgt_rpc.o 00:01:59.248 CC lib/virtio/virtio.o 00:01:59.248 CC lib/virtio/virtio_vhost_user.o 00:01:59.248 CC lib/virtio/virtio_vfio_user.o 00:01:59.248 CC lib/virtio/virtio_pci.o 00:01:59.248 CC lib/fsdev/fsdev.o 00:01:59.248 CC lib/fsdev/fsdev_io.o 00:01:59.248 CC lib/accel/accel.o 00:01:59.248 CC lib/blob/request.o 00:01:59.248 CC lib/blob/blobstore.o 00:01:59.248 CC lib/fsdev/fsdev_rpc.o 00:01:59.248 CC lib/blob/zeroes.o 00:01:59.248 CC lib/accel/accel_rpc.o 00:01:59.248 CC lib/blob/blob_bs_dev.o 00:01:59.248 CC lib/accel/accel_sw.o 00:01:59.507 LIB libspdk_init.a 00:01:59.507 SO libspdk_init.so.6.0 00:01:59.507 LIB libspdk_vfu_tgt.a 00:01:59.507 LIB libspdk_virtio.a 00:01:59.507 SO libspdk_virtio.so.7.0 00:01:59.507 SO libspdk_vfu_tgt.so.3.0 00:01:59.507 SYMLINK libspdk_init.so 00:01:59.766 SYMLINK libspdk_vfu_tgt.so 00:01:59.766 SYMLINK libspdk_virtio.so 00:01:59.766 LIB libspdk_fsdev.a 00:01:59.766 SO libspdk_fsdev.so.2.0 00:02:00.025 CC lib/event/app.o 00:02:00.025 CC lib/event/reactor.o 00:02:00.025 CC lib/event/log_rpc.o 00:02:00.025 CC lib/event/app_rpc.o 00:02:00.025 CC lib/event/scheduler_static.o 00:02:00.025 SYMLINK libspdk_fsdev.so 00:02:00.025 LIB libspdk_accel.a 00:02:00.284 SO libspdk_accel.so.16.0 00:02:00.284 LIB libspdk_nvme.a 00:02:00.284 SYMLINK libspdk_accel.so 00:02:00.284 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:00.284 LIB libspdk_event.a 00:02:00.284 SO libspdk_nvme.so.15.0 00:02:00.284 SO libspdk_event.so.14.0 00:02:00.284 SYMLINK libspdk_event.so 00:02:00.543 SYMLINK libspdk_nvme.so 00:02:00.543 CC lib/bdev/bdev.o 00:02:00.543 CC lib/bdev/bdev_rpc.o 00:02:00.543 CC lib/bdev/bdev_zone.o 00:02:00.543 CC lib/bdev/part.o 00:02:00.543 CC lib/bdev/scsi_nvme.o 00:02:00.802 LIB libspdk_fuse_dispatcher.a 00:02:00.802 SO 
libspdk_fuse_dispatcher.so.1.0 00:02:00.802 SYMLINK libspdk_fuse_dispatcher.so 00:02:01.370 LIB libspdk_blob.a 00:02:01.629 SO libspdk_blob.so.11.0 00:02:01.629 SYMLINK libspdk_blob.so 00:02:01.888 CC lib/lvol/lvol.o 00:02:01.888 CC lib/blobfs/blobfs.o 00:02:01.888 CC lib/blobfs/tree.o 00:02:02.457 LIB libspdk_bdev.a 00:02:02.457 SO libspdk_bdev.so.17.0 00:02:02.457 LIB libspdk_blobfs.a 00:02:02.457 SYMLINK libspdk_bdev.so 00:02:02.457 SO libspdk_blobfs.so.10.0 00:02:02.457 LIB libspdk_lvol.a 00:02:02.457 SYMLINK libspdk_blobfs.so 00:02:02.717 SO libspdk_lvol.so.10.0 00:02:02.717 SYMLINK libspdk_lvol.so 00:02:02.717 CC lib/nbd/nbd.o 00:02:02.717 CC lib/nbd/nbd_rpc.o 00:02:02.717 CC lib/ublk/ublk.o 00:02:02.717 CC lib/ublk/ublk_rpc.o 00:02:02.717 CC lib/nvmf/ctrlr.o 00:02:02.717 CC lib/nvmf/ctrlr_discovery.o 00:02:02.717 CC lib/nvmf/ctrlr_bdev.o 00:02:02.717 CC lib/nvmf/subsystem.o 00:02:02.717 CC lib/nvmf/nvmf.o 00:02:02.717 CC lib/nvmf/nvmf_rpc.o 00:02:02.717 CC lib/nvmf/transport.o 00:02:02.717 CC lib/scsi/dev.o 00:02:02.717 CC lib/nvmf/tcp.o 00:02:02.717 CC lib/scsi/lun.o 00:02:02.717 CC lib/scsi/port.o 00:02:02.717 CC lib/ftl/ftl_core.o 00:02:02.717 CC lib/nvmf/stubs.o 00:02:02.717 CC lib/nvmf/mdns_server.o 00:02:02.717 CC lib/scsi/scsi.o 00:02:02.717 CC lib/ftl/ftl_init.o 00:02:02.717 CC lib/nvmf/vfio_user.o 00:02:02.717 CC lib/ftl/ftl_layout.o 00:02:02.717 CC lib/scsi/scsi_bdev.o 00:02:02.717 CC lib/scsi/scsi_pr.o 00:02:02.717 CC lib/nvmf/rdma.o 00:02:02.717 CC lib/ftl/ftl_debug.o 00:02:02.717 CC lib/nvmf/auth.o 00:02:02.717 CC lib/scsi/scsi_rpc.o 00:02:02.717 CC lib/ftl/ftl_sb.o 00:02:02.717 CC lib/ftl/ftl_io.o 00:02:02.717 CC lib/scsi/task.o 00:02:02.717 CC lib/ftl/ftl_l2p.o 00:02:02.717 CC lib/ftl/ftl_l2p_flat.o 00:02:02.717 CC lib/ftl/ftl_nv_cache.o 00:02:02.717 CC lib/ftl/ftl_band.o 00:02:02.717 CC lib/ftl/ftl_band_ops.o 00:02:02.717 CC lib/ftl/ftl_writer.o 00:02:02.717 CC lib/ftl/ftl_rq.o 00:02:02.976 CC lib/ftl/ftl_reloc.o 00:02:02.976 CC 
lib/ftl/ftl_l2p_cache.o 00:02:02.976 CC lib/ftl/ftl_p2l.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt.o 00:02:02.976 CC lib/ftl/ftl_p2l_log.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:02.976 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:02.976 CC lib/ftl/utils/ftl_conf.o 00:02:02.976 CC lib/ftl/utils/ftl_md.o 00:02:02.976 CC lib/ftl/utils/ftl_property.o 00:02:02.976 CC lib/ftl/utils/ftl_mempool.o 00:02:02.976 CC lib/ftl/utils/ftl_bitmap.o 00:02:02.976 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:02.976 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:02.976 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:02.976 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:02.976 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:02.976 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:02.976 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:02.976 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:02.976 CC lib/ftl/ftl_trace.o 00:02:02.976 CC lib/ftl/base/ftl_base_dev.o 00:02:02.976 CC lib/ftl/base/ftl_base_bdev.o 00:02:03.542 LIB libspdk_nbd.a 00:02:03.542 SO libspdk_nbd.so.7.0 00:02:03.542 SYMLINK libspdk_nbd.so 00:02:03.542 LIB libspdk_scsi.a 00:02:03.542 LIB libspdk_ublk.a 00:02:03.542 SO libspdk_ublk.so.3.0 00:02:03.542 SO libspdk_scsi.so.9.0 00:02:03.542 SYMLINK libspdk_ublk.so 00:02:03.542 SYMLINK libspdk_scsi.so 00:02:03.802 LIB 
libspdk_ftl.a 00:02:03.802 CC lib/vhost/vhost.o 00:02:03.802 CC lib/vhost/vhost_rpc.o 00:02:03.802 CC lib/vhost/vhost_blk.o 00:02:03.802 CC lib/vhost/vhost_scsi.o 00:02:03.802 CC lib/iscsi/conn.o 00:02:03.802 CC lib/vhost/rte_vhost_user.o 00:02:03.802 CC lib/iscsi/init_grp.o 00:02:03.802 CC lib/iscsi/iscsi.o 00:02:03.802 CC lib/iscsi/param.o 00:02:03.802 CC lib/iscsi/portal_grp.o 00:02:03.802 CC lib/iscsi/tgt_node.o 00:02:03.802 CC lib/iscsi/iscsi_subsystem.o 00:02:03.802 CC lib/iscsi/iscsi_rpc.o 00:02:03.802 CC lib/iscsi/task.o 00:02:03.802 SO libspdk_ftl.so.9.0 00:02:04.062 SYMLINK libspdk_ftl.so 00:02:04.629 LIB libspdk_nvmf.a 00:02:04.629 SO libspdk_nvmf.so.20.0 00:02:04.629 LIB libspdk_vhost.a 00:02:04.888 SO libspdk_vhost.so.8.0 00:02:04.888 SYMLINK libspdk_nvmf.so 00:02:04.888 SYMLINK libspdk_vhost.so 00:02:04.888 LIB libspdk_iscsi.a 00:02:04.888 SO libspdk_iscsi.so.8.0 00:02:05.147 SYMLINK libspdk_iscsi.so 00:02:05.716 CC module/env_dpdk/env_dpdk_rpc.o 00:02:05.716 CC module/vfu_device/vfu_virtio.o 00:02:05.716 CC module/vfu_device/vfu_virtio_blk.o 00:02:05.716 CC module/vfu_device/vfu_virtio_scsi.o 00:02:05.716 CC module/vfu_device/vfu_virtio_fs.o 00:02:05.716 CC module/vfu_device/vfu_virtio_rpc.o 00:02:05.716 CC module/accel/dsa/accel_dsa.o 00:02:05.716 CC module/keyring/file/keyring_rpc.o 00:02:05.716 CC module/keyring/file/keyring.o 00:02:05.716 CC module/accel/dsa/accel_dsa_rpc.o 00:02:05.716 CC module/keyring/linux/keyring.o 00:02:05.716 CC module/keyring/linux/keyring_rpc.o 00:02:05.716 CC module/sock/posix/posix.o 00:02:05.716 CC module/fsdev/aio/fsdev_aio.o 00:02:05.716 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:05.716 LIB libspdk_env_dpdk_rpc.a 00:02:05.716 CC module/fsdev/aio/linux_aio_mgr.o 00:02:05.716 CC module/blob/bdev/blob_bdev.o 00:02:05.716 CC module/accel/error/accel_error.o 00:02:05.716 CC module/accel/error/accel_error_rpc.o 00:02:05.716 CC module/accel/iaa/accel_iaa.o 00:02:05.716 CC module/scheduler/gscheduler/gscheduler.o 
00:02:05.716 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:05.716 CC module/accel/ioat/accel_ioat.o 00:02:05.716 CC module/accel/iaa/accel_iaa_rpc.o 00:02:05.716 CC module/accel/ioat/accel_ioat_rpc.o 00:02:05.716 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:05.716 SO libspdk_env_dpdk_rpc.so.6.0 00:02:05.975 SYMLINK libspdk_env_dpdk_rpc.so 00:02:05.975 LIB libspdk_keyring_linux.a 00:02:05.975 LIB libspdk_keyring_file.a 00:02:05.975 LIB libspdk_scheduler_gscheduler.a 00:02:05.975 LIB libspdk_scheduler_dpdk_governor.a 00:02:05.975 SO libspdk_scheduler_gscheduler.so.4.0 00:02:05.975 SO libspdk_keyring_file.so.2.0 00:02:05.975 SO libspdk_keyring_linux.so.1.0 00:02:05.975 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:05.975 LIB libspdk_accel_error.a 00:02:05.975 LIB libspdk_accel_ioat.a 00:02:05.975 LIB libspdk_accel_iaa.a 00:02:05.975 LIB libspdk_scheduler_dynamic.a 00:02:05.975 SYMLINK libspdk_scheduler_gscheduler.so 00:02:05.975 SO libspdk_accel_error.so.2.0 00:02:05.975 SYMLINK libspdk_keyring_linux.so 00:02:05.975 SO libspdk_accel_ioat.so.6.0 00:02:05.975 SO libspdk_accel_iaa.so.3.0 00:02:05.975 SO libspdk_scheduler_dynamic.so.4.0 00:02:05.975 SYMLINK libspdk_keyring_file.so 00:02:05.975 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:05.975 LIB libspdk_blob_bdev.a 00:02:05.975 LIB libspdk_accel_dsa.a 00:02:05.975 SO libspdk_blob_bdev.so.11.0 00:02:05.975 SYMLINK libspdk_accel_error.so 00:02:05.975 SYMLINK libspdk_accel_ioat.so 00:02:05.975 SO libspdk_accel_dsa.so.5.0 00:02:05.975 SYMLINK libspdk_scheduler_dynamic.so 00:02:05.975 SYMLINK libspdk_accel_iaa.so 00:02:06.235 SYMLINK libspdk_blob_bdev.so 00:02:06.235 LIB libspdk_vfu_device.a 00:02:06.235 SYMLINK libspdk_accel_dsa.so 00:02:06.235 SO libspdk_vfu_device.so.3.0 00:02:06.235 SYMLINK libspdk_vfu_device.so 00:02:06.235 LIB libspdk_fsdev_aio.a 00:02:06.235 SO libspdk_fsdev_aio.so.1.0 00:02:06.235 LIB libspdk_sock_posix.a 00:02:06.494 SO libspdk_sock_posix.so.6.0 00:02:06.494 SYMLINK 
libspdk_fsdev_aio.so 00:02:06.494 SYMLINK libspdk_sock_posix.so 00:02:06.494 CC module/bdev/lvol/vbdev_lvol.o 00:02:06.494 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:06.494 CC module/bdev/error/vbdev_error.o 00:02:06.494 CC module/bdev/error/vbdev_error_rpc.o 00:02:06.494 CC module/blobfs/bdev/blobfs_bdev.o 00:02:06.494 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:06.494 CC module/bdev/delay/vbdev_delay.o 00:02:06.494 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:06.494 CC module/bdev/null/bdev_null.o 00:02:06.494 CC module/bdev/malloc/bdev_malloc.o 00:02:06.494 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:06.494 CC module/bdev/null/bdev_null_rpc.o 00:02:06.494 CC module/bdev/nvme/bdev_nvme.o 00:02:06.494 CC module/bdev/nvme/nvme_rpc.o 00:02:06.494 CC module/bdev/nvme/bdev_mdns_client.o 00:02:06.494 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:06.494 CC module/bdev/nvme/vbdev_opal.o 00:02:06.494 CC module/bdev/aio/bdev_aio.o 00:02:06.494 CC module/bdev/passthru/vbdev_passthru.o 00:02:06.494 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:06.494 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:06.494 CC module/bdev/gpt/gpt.o 00:02:06.494 CC module/bdev/gpt/vbdev_gpt.o 00:02:06.494 CC module/bdev/aio/bdev_aio_rpc.o 00:02:06.494 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:06.494 CC module/bdev/ftl/bdev_ftl.o 00:02:06.494 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:06.494 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:06.494 CC module/bdev/raid/bdev_raid.o 00:02:06.494 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:06.494 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:06.494 CC module/bdev/raid/bdev_raid_rpc.o 00:02:06.494 CC module/bdev/split/vbdev_split.o 00:02:06.494 CC module/bdev/raid/bdev_raid_sb.o 00:02:06.494 CC module/bdev/split/vbdev_split_rpc.o 00:02:06.494 CC module/bdev/raid/raid0.o 00:02:06.494 CC module/bdev/iscsi/bdev_iscsi.o 00:02:06.494 CC module/bdev/raid/raid1.o 00:02:06.494 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:06.494 CC module/bdev/raid/concat.o 
00:02:06.494 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:06.494 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:06.752 LIB libspdk_blobfs_bdev.a 00:02:06.752 SO libspdk_blobfs_bdev.so.6.0 00:02:06.752 LIB libspdk_bdev_error.a 00:02:06.752 LIB libspdk_bdev_gpt.a 00:02:07.010 LIB libspdk_bdev_null.a 00:02:07.010 SO libspdk_bdev_error.so.6.0 00:02:07.010 LIB libspdk_bdev_split.a 00:02:07.010 SO libspdk_bdev_gpt.so.6.0 00:02:07.010 LIB libspdk_bdev_ftl.a 00:02:07.010 SYMLINK libspdk_blobfs_bdev.so 00:02:07.010 SO libspdk_bdev_null.so.6.0 00:02:07.010 SO libspdk_bdev_split.so.6.0 00:02:07.010 LIB libspdk_bdev_passthru.a 00:02:07.010 SO libspdk_bdev_ftl.so.6.0 00:02:07.010 LIB libspdk_bdev_aio.a 00:02:07.010 SYMLINK libspdk_bdev_gpt.so 00:02:07.010 LIB libspdk_bdev_zone_block.a 00:02:07.010 SYMLINK libspdk_bdev_error.so 00:02:07.010 LIB libspdk_bdev_malloc.a 00:02:07.010 SO libspdk_bdev_aio.so.6.0 00:02:07.010 SO libspdk_bdev_zone_block.so.6.0 00:02:07.010 SO libspdk_bdev_passthru.so.6.0 00:02:07.011 SYMLINK libspdk_bdev_null.so 00:02:07.011 SO libspdk_bdev_malloc.so.6.0 00:02:07.011 SYMLINK libspdk_bdev_split.so 00:02:07.011 LIB libspdk_bdev_delay.a 00:02:07.011 LIB libspdk_bdev_iscsi.a 00:02:07.011 SYMLINK libspdk_bdev_ftl.so 00:02:07.011 SYMLINK libspdk_bdev_aio.so 00:02:07.011 LIB libspdk_bdev_lvol.a 00:02:07.011 SO libspdk_bdev_iscsi.so.6.0 00:02:07.011 SO libspdk_bdev_delay.so.6.0 00:02:07.011 SYMLINK libspdk_bdev_zone_block.so 00:02:07.011 SYMLINK libspdk_bdev_passthru.so 00:02:07.011 SYMLINK libspdk_bdev_malloc.so 00:02:07.011 SO libspdk_bdev_lvol.so.6.0 00:02:07.011 SYMLINK libspdk_bdev_iscsi.so 00:02:07.011 SYMLINK libspdk_bdev_delay.so 00:02:07.011 LIB libspdk_bdev_virtio.a 00:02:07.011 SYMLINK libspdk_bdev_lvol.so 00:02:07.269 SO libspdk_bdev_virtio.so.6.0 00:02:07.269 SYMLINK libspdk_bdev_virtio.so 00:02:07.528 LIB libspdk_bdev_raid.a 00:02:07.528 SO libspdk_bdev_raid.so.6.0 00:02:07.528 SYMLINK libspdk_bdev_raid.so 00:02:08.466 LIB 
libspdk_bdev_nvme.a 00:02:08.466 SO libspdk_bdev_nvme.so.7.1 00:02:08.726 SYMLINK libspdk_bdev_nvme.so 00:02:09.294 CC module/event/subsystems/iobuf/iobuf.o 00:02:09.294 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:09.294 CC module/event/subsystems/vmd/vmd.o 00:02:09.294 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:09.294 CC module/event/subsystems/keyring/keyring.o 00:02:09.294 CC module/event/subsystems/scheduler/scheduler.o 00:02:09.294 CC module/event/subsystems/sock/sock.o 00:02:09.294 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:09.294 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:09.294 CC module/event/subsystems/fsdev/fsdev.o 00:02:09.294 LIB libspdk_event_scheduler.a 00:02:09.294 LIB libspdk_event_vmd.a 00:02:09.294 LIB libspdk_event_keyring.a 00:02:09.294 LIB libspdk_event_vfu_tgt.a 00:02:09.294 LIB libspdk_event_sock.a 00:02:09.294 LIB libspdk_event_iobuf.a 00:02:09.294 LIB libspdk_event_fsdev.a 00:02:09.294 LIB libspdk_event_vhost_blk.a 00:02:09.294 SO libspdk_event_scheduler.so.4.0 00:02:09.294 SO libspdk_event_sock.so.5.0 00:02:09.553 SO libspdk_event_vmd.so.6.0 00:02:09.553 SO libspdk_event_keyring.so.1.0 00:02:09.553 SO libspdk_event_vfu_tgt.so.3.0 00:02:09.553 SO libspdk_event_fsdev.so.1.0 00:02:09.553 SO libspdk_event_iobuf.so.3.0 00:02:09.553 SO libspdk_event_vhost_blk.so.3.0 00:02:09.553 SYMLINK libspdk_event_scheduler.so 00:02:09.553 SYMLINK libspdk_event_vfu_tgt.so 00:02:09.553 SYMLINK libspdk_event_keyring.so 00:02:09.553 SYMLINK libspdk_event_sock.so 00:02:09.553 SYMLINK libspdk_event_vmd.so 00:02:09.553 SYMLINK libspdk_event_iobuf.so 00:02:09.553 SYMLINK libspdk_event_fsdev.so 00:02:09.553 SYMLINK libspdk_event_vhost_blk.so 00:02:09.812 CC module/event/subsystems/accel/accel.o 00:02:09.812 LIB libspdk_event_accel.a 00:02:10.072 SO libspdk_event_accel.so.6.0 00:02:10.072 SYMLINK libspdk_event_accel.so 00:02:10.331 CC module/event/subsystems/bdev/bdev.o 00:02:10.591 LIB libspdk_event_bdev.a 00:02:10.591 SO 
libspdk_event_bdev.so.6.0 00:02:10.591 SYMLINK libspdk_event_bdev.so 00:02:10.850 CC module/event/subsystems/scsi/scsi.o 00:02:10.850 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:10.850 CC module/event/subsystems/nbd/nbd.o 00:02:10.850 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:10.850 CC module/event/subsystems/ublk/ublk.o 00:02:11.109 LIB libspdk_event_scsi.a 00:02:11.109 LIB libspdk_event_nbd.a 00:02:11.109 LIB libspdk_event_ublk.a 00:02:11.109 SO libspdk_event_scsi.so.6.0 00:02:11.109 SO libspdk_event_nbd.so.6.0 00:02:11.109 SO libspdk_event_ublk.so.3.0 00:02:11.109 LIB libspdk_event_nvmf.a 00:02:11.109 SYMLINK libspdk_event_scsi.so 00:02:11.109 SYMLINK libspdk_event_nbd.so 00:02:11.109 SO libspdk_event_nvmf.so.6.0 00:02:11.109 SYMLINK libspdk_event_ublk.so 00:02:11.109 SYMLINK libspdk_event_nvmf.so 00:02:11.368 CC module/event/subsystems/iscsi/iscsi.o 00:02:11.368 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:11.628 LIB libspdk_event_vhost_scsi.a 00:02:11.628 LIB libspdk_event_iscsi.a 00:02:11.628 SO libspdk_event_vhost_scsi.so.3.0 00:02:11.628 SO libspdk_event_iscsi.so.6.0 00:02:11.628 SYMLINK libspdk_event_vhost_scsi.so 00:02:11.628 SYMLINK libspdk_event_iscsi.so 00:02:11.888 SO libspdk.so.6.0 00:02:11.888 SYMLINK libspdk.so 00:02:12.147 TEST_HEADER include/spdk/accel.h 00:02:12.147 TEST_HEADER include/spdk/accel_module.h 00:02:12.147 TEST_HEADER include/spdk/assert.h 00:02:12.147 TEST_HEADER include/spdk/base64.h 00:02:12.147 TEST_HEADER include/spdk/bdev.h 00:02:12.147 TEST_HEADER include/spdk/barrier.h 00:02:12.147 TEST_HEADER include/spdk/bdev_module.h 00:02:12.147 TEST_HEADER include/spdk/bdev_zone.h 00:02:12.147 TEST_HEADER include/spdk/bit_array.h 00:02:12.147 TEST_HEADER include/spdk/bit_pool.h 00:02:12.147 TEST_HEADER include/spdk/blob_bdev.h 00:02:12.147 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:12.147 CC app/spdk_lspci/spdk_lspci.o 00:02:12.147 CC app/spdk_top/spdk_top.o 00:02:12.147 TEST_HEADER include/spdk/blobfs.h 
00:02:12.147 CC app/spdk_nvme_perf/perf.o 00:02:12.147 CXX app/trace/trace.o 00:02:12.147 TEST_HEADER include/spdk/blob.h 00:02:12.147 CC app/spdk_nvme_discover/discovery_aer.o 00:02:12.147 TEST_HEADER include/spdk/config.h 00:02:12.147 TEST_HEADER include/spdk/conf.h 00:02:12.147 TEST_HEADER include/spdk/cpuset.h 00:02:12.147 TEST_HEADER include/spdk/crc16.h 00:02:12.147 TEST_HEADER include/spdk/crc32.h 00:02:12.147 TEST_HEADER include/spdk/crc64.h 00:02:12.147 TEST_HEADER include/spdk/dif.h 00:02:12.147 TEST_HEADER include/spdk/endian.h 00:02:12.147 TEST_HEADER include/spdk/dma.h 00:02:12.147 TEST_HEADER include/spdk/env_dpdk.h 00:02:12.147 TEST_HEADER include/spdk/env.h 00:02:12.147 TEST_HEADER include/spdk/event.h 00:02:12.147 TEST_HEADER include/spdk/fd_group.h 00:02:12.147 CC test/rpc_client/rpc_client_test.o 00:02:12.147 TEST_HEADER include/spdk/fd.h 00:02:12.147 TEST_HEADER include/spdk/fsdev.h 00:02:12.147 TEST_HEADER include/spdk/file.h 00:02:12.147 CC app/trace_record/trace_record.o 00:02:12.147 TEST_HEADER include/spdk/ftl.h 00:02:12.148 TEST_HEADER include/spdk/fsdev_module.h 00:02:12.148 CC app/spdk_nvme_identify/identify.o 00:02:12.148 TEST_HEADER include/spdk/fuse_dispatcher.h 00:02:12.148 TEST_HEADER include/spdk/hexlify.h 00:02:12.148 TEST_HEADER include/spdk/gpt_spec.h 00:02:12.148 TEST_HEADER include/spdk/histogram_data.h 00:02:12.148 TEST_HEADER include/spdk/idxd_spec.h 00:02:12.148 TEST_HEADER include/spdk/idxd.h 00:02:12.148 TEST_HEADER include/spdk/init.h 00:02:12.148 TEST_HEADER include/spdk/ioat.h 00:02:12.148 TEST_HEADER include/spdk/iscsi_spec.h 00:02:12.148 TEST_HEADER include/spdk/ioat_spec.h 00:02:12.148 TEST_HEADER include/spdk/jsonrpc.h 00:02:12.148 TEST_HEADER include/spdk/keyring.h 00:02:12.148 TEST_HEADER include/spdk/json.h 00:02:12.148 TEST_HEADER include/spdk/keyring_module.h 00:02:12.148 TEST_HEADER include/spdk/likely.h 00:02:12.148 TEST_HEADER include/spdk/log.h 00:02:12.148 TEST_HEADER include/spdk/lvol.h 00:02:12.148 
TEST_HEADER include/spdk/md5.h 00:02:12.148 TEST_HEADER include/spdk/mmio.h 00:02:12.148 TEST_HEADER include/spdk/memory.h 00:02:12.148 TEST_HEADER include/spdk/nbd.h 00:02:12.148 TEST_HEADER include/spdk/notify.h 00:02:12.148 TEST_HEADER include/spdk/net.h 00:02:12.148 TEST_HEADER include/spdk/nvme.h 00:02:12.148 TEST_HEADER include/spdk/nvme_intel.h 00:02:12.148 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:12.148 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:12.148 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:12.148 TEST_HEADER include/spdk/nvme_spec.h 00:02:12.148 TEST_HEADER include/spdk/nvme_zns.h 00:02:12.148 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:12.148 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:12.148 TEST_HEADER include/spdk/nvmf.h 00:02:12.148 TEST_HEADER include/spdk/nvmf_spec.h 00:02:12.148 TEST_HEADER include/spdk/nvmf_transport.h 00:02:12.148 CC app/nvmf_tgt/nvmf_main.o 00:02:12.148 TEST_HEADER include/spdk/opal.h 00:02:12.148 TEST_HEADER include/spdk/opal_spec.h 00:02:12.148 TEST_HEADER include/spdk/pipe.h 00:02:12.148 TEST_HEADER include/spdk/pci_ids.h 00:02:12.148 TEST_HEADER include/spdk/queue.h 00:02:12.148 TEST_HEADER include/spdk/scheduler.h 00:02:12.148 TEST_HEADER include/spdk/reduce.h 00:02:12.148 TEST_HEADER include/spdk/rpc.h 00:02:12.148 TEST_HEADER include/spdk/scsi_spec.h 00:02:12.148 TEST_HEADER include/spdk/scsi.h 00:02:12.148 TEST_HEADER include/spdk/sock.h 00:02:12.148 TEST_HEADER include/spdk/stdinc.h 00:02:12.148 TEST_HEADER include/spdk/string.h 00:02:12.148 TEST_HEADER include/spdk/thread.h 00:02:12.148 CC app/spdk_dd/spdk_dd.o 00:02:12.148 TEST_HEADER include/spdk/trace.h 00:02:12.148 TEST_HEADER include/spdk/tree.h 00:02:12.148 TEST_HEADER include/spdk/trace_parser.h 00:02:12.148 TEST_HEADER include/spdk/ublk.h 00:02:12.148 TEST_HEADER include/spdk/util.h 00:02:12.148 TEST_HEADER include/spdk/version.h 00:02:12.148 TEST_HEADER include/spdk/uuid.h 00:02:12.148 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:12.148 
TEST_HEADER include/spdk/vfio_user_spec.h 00:02:12.148 TEST_HEADER include/spdk/vhost.h 00:02:12.148 TEST_HEADER include/spdk/vmd.h 00:02:12.148 CC app/iscsi_tgt/iscsi_tgt.o 00:02:12.148 TEST_HEADER include/spdk/xor.h 00:02:12.148 TEST_HEADER include/spdk/zipf.h 00:02:12.148 CXX test/cpp_headers/accel.o 00:02:12.148 CXX test/cpp_headers/accel_module.o 00:02:12.148 CXX test/cpp_headers/barrier.o 00:02:12.148 CXX test/cpp_headers/base64.o 00:02:12.148 CXX test/cpp_headers/bdev.o 00:02:12.148 CXX test/cpp_headers/assert.o 00:02:12.148 CXX test/cpp_headers/bdev_module.o 00:02:12.148 CXX test/cpp_headers/bdev_zone.o 00:02:12.148 CXX test/cpp_headers/bit_array.o 00:02:12.148 CXX test/cpp_headers/bit_pool.o 00:02:12.148 CXX test/cpp_headers/blob_bdev.o 00:02:12.148 CXX test/cpp_headers/blobfs.o 00:02:12.148 CXX test/cpp_headers/blobfs_bdev.o 00:02:12.148 CXX test/cpp_headers/blob.o 00:02:12.148 CXX test/cpp_headers/config.o 00:02:12.148 CXX test/cpp_headers/conf.o 00:02:12.148 CXX test/cpp_headers/crc16.o 00:02:12.148 CXX test/cpp_headers/cpuset.o 00:02:12.148 CXX test/cpp_headers/crc32.o 00:02:12.415 CXX test/cpp_headers/crc64.o 00:02:12.415 CXX test/cpp_headers/dif.o 00:02:12.415 CXX test/cpp_headers/dma.o 00:02:12.415 CXX test/cpp_headers/env_dpdk.o 00:02:12.415 CXX test/cpp_headers/endian.o 00:02:12.415 CXX test/cpp_headers/fd_group.o 00:02:12.415 CXX test/cpp_headers/env.o 00:02:12.415 CXX test/cpp_headers/file.o 00:02:12.415 CXX test/cpp_headers/fd.o 00:02:12.415 CXX test/cpp_headers/event.o 00:02:12.415 CXX test/cpp_headers/fsdev.o 00:02:12.415 CXX test/cpp_headers/fsdev_module.o 00:02:12.415 CC app/spdk_tgt/spdk_tgt.o 00:02:12.415 CXX test/cpp_headers/fuse_dispatcher.o 00:02:12.415 CXX test/cpp_headers/ftl.o 00:02:12.415 CXX test/cpp_headers/gpt_spec.o 00:02:12.415 CXX test/cpp_headers/hexlify.o 00:02:12.415 CXX test/cpp_headers/idxd.o 00:02:12.415 CXX test/cpp_headers/idxd_spec.o 00:02:12.415 CXX test/cpp_headers/histogram_data.o 00:02:12.415 CXX 
test/cpp_headers/init.o 00:02:12.415 CXX test/cpp_headers/iscsi_spec.o 00:02:12.415 CXX test/cpp_headers/json.o 00:02:12.415 CXX test/cpp_headers/ioat.o 00:02:12.415 CXX test/cpp_headers/ioat_spec.o 00:02:12.415 CXX test/cpp_headers/jsonrpc.o 00:02:12.415 CXX test/cpp_headers/likely.o 00:02:12.415 CXX test/cpp_headers/keyring_module.o 00:02:12.415 CXX test/cpp_headers/keyring.o 00:02:12.415 CXX test/cpp_headers/lvol.o 00:02:12.416 CXX test/cpp_headers/log.o 00:02:12.416 CXX test/cpp_headers/md5.o 00:02:12.416 CXX test/cpp_headers/memory.o 00:02:12.416 CXX test/cpp_headers/nbd.o 00:02:12.416 CXX test/cpp_headers/mmio.o 00:02:12.416 CXX test/cpp_headers/notify.o 00:02:12.416 CXX test/cpp_headers/net.o 00:02:12.416 CXX test/cpp_headers/nvme.o 00:02:12.416 CXX test/cpp_headers/nvme_ocssd.o 00:02:12.416 CXX test/cpp_headers/nvme_intel.o 00:02:12.416 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:12.416 CXX test/cpp_headers/nvme_spec.o 00:02:12.416 CXX test/cpp_headers/nvme_zns.o 00:02:12.416 CXX test/cpp_headers/nvmf_cmd.o 00:02:12.416 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:12.416 CXX test/cpp_headers/nvmf.o 00:02:12.416 CXX test/cpp_headers/nvmf_spec.o 00:02:12.416 CXX test/cpp_headers/nvmf_transport.o 00:02:12.416 CXX test/cpp_headers/opal.o 00:02:12.416 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:12.416 CC examples/util/zipf/zipf.o 00:02:12.416 CC test/thread/poller_perf/poller_perf.o 00:02:12.416 CXX test/cpp_headers/opal_spec.o 00:02:12.416 CC test/app/histogram_perf/histogram_perf.o 00:02:12.416 CC examples/ioat/verify/verify.o 00:02:12.416 CC test/env/pci/pci_ut.o 00:02:12.416 CC test/app/stub/stub.o 00:02:12.416 CC test/app/jsoncat/jsoncat.o 00:02:12.416 CC test/env/memory/memory_ut.o 00:02:12.416 CC test/app/bdev_svc/bdev_svc.o 00:02:12.416 CC test/dma/test_dma/test_dma.o 00:02:12.416 CC examples/ioat/perf/perf.o 00:02:12.416 CC test/env/vtophys/vtophys.o 00:02:12.416 CC app/fio/nvme/fio_plugin.o 00:02:12.416 CC app/fio/bdev/fio_plugin.o 
00:02:12.416 LINK spdk_lspci 00:02:12.680 LINK rpc_client_test 00:02:12.680 LINK nvmf_tgt 00:02:12.941 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:12.941 LINK poller_perf 00:02:12.941 LINK histogram_perf 00:02:12.941 LINK env_dpdk_post_init 00:02:12.941 LINK zipf 00:02:12.941 LINK interrupt_tgt 00:02:12.941 LINK spdk_nvme_discover 00:02:12.941 LINK jsoncat 00:02:12.941 CC test/env/mem_callbacks/mem_callbacks.o 00:02:12.941 CXX test/cpp_headers/pci_ids.o 00:02:12.941 CXX test/cpp_headers/pipe.o 00:02:12.941 CXX test/cpp_headers/reduce.o 00:02:12.941 CXX test/cpp_headers/queue.o 00:02:12.941 CXX test/cpp_headers/rpc.o 00:02:12.941 CXX test/cpp_headers/scheduler.o 00:02:12.941 CXX test/cpp_headers/scsi.o 00:02:12.941 CXX test/cpp_headers/scsi_spec.o 00:02:12.941 CXX test/cpp_headers/stdinc.o 00:02:12.941 CXX test/cpp_headers/sock.o 00:02:12.941 CXX test/cpp_headers/string.o 00:02:12.941 CXX test/cpp_headers/thread.o 00:02:12.941 CXX test/cpp_headers/trace.o 00:02:12.941 CXX test/cpp_headers/trace_parser.o 00:02:12.941 CXX test/cpp_headers/tree.o 00:02:12.941 LINK bdev_svc 00:02:12.941 CXX test/cpp_headers/ublk.o 00:02:12.941 CXX test/cpp_headers/util.o 00:02:12.941 CXX test/cpp_headers/uuid.o 00:02:12.941 CXX test/cpp_headers/version.o 00:02:12.941 CXX test/cpp_headers/vfio_user_pci.o 00:02:12.941 CXX test/cpp_headers/vfio_user_spec.o 00:02:12.941 CXX test/cpp_headers/vhost.o 00:02:12.941 CXX test/cpp_headers/vmd.o 00:02:12.941 CXX test/cpp_headers/xor.o 00:02:12.941 CXX test/cpp_headers/zipf.o 00:02:12.941 LINK verify 00:02:12.941 LINK ioat_perf 00:02:12.941 LINK vtophys 00:02:12.941 LINK iscsi_tgt 00:02:12.941 LINK spdk_dd 00:02:12.941 LINK spdk_trace_record 00:02:13.200 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:13.200 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:13.200 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:13.200 LINK spdk_tgt 00:02:13.200 LINK stub 00:02:13.200 LINK spdk_trace 00:02:13.200 LINK pci_ut 00:02:13.458 CC 
test/event/event_perf/event_perf.o 00:02:13.458 CC test/event/reactor/reactor.o 00:02:13.458 CC examples/sock/hello_world/hello_sock.o 00:02:13.458 LINK nvme_fuzz 00:02:13.458 CC test/event/reactor_perf/reactor_perf.o 00:02:13.458 CC examples/idxd/perf/perf.o 00:02:13.458 CC examples/vmd/lsvmd/lsvmd.o 00:02:13.458 CC examples/vmd/led/led.o 00:02:13.458 LINK spdk_bdev 00:02:13.458 CC test/event/scheduler/scheduler.o 00:02:13.458 CC examples/thread/thread/thread_ex.o 00:02:13.458 CC test/event/app_repeat/app_repeat.o 00:02:13.458 LINK spdk_nvme_identify 00:02:13.458 CC app/vhost/vhost.o 00:02:13.458 LINK spdk_nvme 00:02:13.458 LINK test_dma 00:02:13.458 LINK reactor 00:02:13.458 LINK reactor_perf 00:02:13.458 LINK vhost_fuzz 00:02:13.459 LINK lsvmd 00:02:13.459 LINK event_perf 00:02:13.459 LINK led 00:02:13.459 LINK spdk_top 00:02:13.459 LINK spdk_nvme_perf 00:02:13.718 LINK hello_sock 00:02:13.718 LINK app_repeat 00:02:13.718 LINK mem_callbacks 00:02:13.718 LINK scheduler 00:02:13.718 LINK thread 00:02:13.718 LINK vhost 00:02:13.718 LINK idxd_perf 00:02:13.978 LINK memory_ut 00:02:13.978 CC test/nvme/sgl/sgl.o 00:02:13.978 CC test/nvme/e2edp/nvme_dp.o 00:02:13.978 CC test/nvme/aer/aer.o 00:02:13.978 CC test/nvme/reserve/reserve.o 00:02:13.978 CC test/nvme/cuse/cuse.o 00:02:13.978 CC test/nvme/connect_stress/connect_stress.o 00:02:13.978 CC test/nvme/overhead/overhead.o 00:02:13.978 CC test/nvme/boot_partition/boot_partition.o 00:02:13.978 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:13.978 CC test/nvme/compliance/nvme_compliance.o 00:02:13.978 CC test/nvme/reset/reset.o 00:02:13.978 CC test/nvme/startup/startup.o 00:02:13.978 CC test/nvme/err_injection/err_injection.o 00:02:13.978 CC test/nvme/simple_copy/simple_copy.o 00:02:13.978 CC test/nvme/fdp/fdp.o 00:02:13.978 CC test/nvme/fused_ordering/fused_ordering.o 00:02:13.978 CC test/blobfs/mkfs/mkfs.o 00:02:13.978 CC test/accel/dif/dif.o 00:02:13.978 CC examples/nvme/hello_world/hello_world.o 00:02:13.978 CC 
examples/nvme/reconnect/reconnect.o 00:02:13.978 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:13.978 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:13.978 CC examples/nvme/arbitration/arbitration.o 00:02:13.978 CC examples/nvme/hotplug/hotplug.o 00:02:13.978 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:13.978 CC examples/nvme/abort/abort.o 00:02:13.978 CC test/lvol/esnap/esnap.o 00:02:14.237 CC examples/accel/perf/accel_perf.o 00:02:14.237 CC examples/blob/cli/blobcli.o 00:02:14.237 LINK boot_partition 00:02:14.237 CC examples/blob/hello_world/hello_blob.o 00:02:14.237 LINK startup 00:02:14.237 LINK doorbell_aers 00:02:14.237 CC examples/fsdev/hello_world/hello_fsdev.o 00:02:14.237 LINK err_injection 00:02:14.237 LINK connect_stress 00:02:14.237 LINK reserve 00:02:14.237 LINK fused_ordering 00:02:14.237 LINK mkfs 00:02:14.237 LINK cmb_copy 00:02:14.237 LINK pmr_persistence 00:02:14.237 LINK reset 00:02:14.237 LINK simple_copy 00:02:14.237 LINK sgl 00:02:14.237 LINK aer 00:02:14.237 LINK nvme_dp 00:02:14.237 LINK hello_world 00:02:14.237 LINK hotplug 00:02:14.237 LINK overhead 00:02:14.237 LINK nvme_compliance 00:02:14.237 LINK fdp 00:02:14.237 LINK arbitration 00:02:14.496 LINK reconnect 00:02:14.496 LINK abort 00:02:14.496 LINK hello_blob 00:02:14.496 LINK hello_fsdev 00:02:14.496 LINK nvme_manage 00:02:14.496 LINK iscsi_fuzz 00:02:14.496 LINK dif 00:02:14.496 LINK accel_perf 00:02:14.496 LINK blobcli 00:02:15.064 LINK cuse 00:02:15.064 CC examples/bdev/bdevperf/bdevperf.o 00:02:15.064 CC examples/bdev/hello_world/hello_bdev.o 00:02:15.064 CC test/bdev/bdevio/bdevio.o 00:02:15.323 LINK hello_bdev 00:02:15.323 LINK bdevio 00:02:15.582 LINK bdevperf 00:02:16.151 CC examples/nvmf/nvmf/nvmf.o 00:02:16.411 LINK nvmf 00:02:17.791 LINK esnap 00:02:17.791 00:02:17.791 real 0m55.388s 00:02:17.791 user 8m0.836s 00:02:17.791 sys 3m43.385s 00:02:18.049 08:46:33 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:02:18.049 08:46:33 make -- 
common/autotest_common.sh@10 -- $ set +x 00:02:18.049 ************************************ 00:02:18.049 END TEST make 00:02:18.049 ************************************ 00:02:18.049 08:46:33 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:18.049 08:46:33 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:18.049 08:46:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:18.049 08:46:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.049 08:46:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:18.049 08:46:33 -- pm/common@44 -- $ pid=2061766 00:02:18.049 08:46:33 -- pm/common@50 -- $ kill -TERM 2061766 00:02:18.049 08:46:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.049 08:46:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:18.049 08:46:33 -- pm/common@44 -- $ pid=2061768 00:02:18.049 08:46:33 -- pm/common@50 -- $ kill -TERM 2061768 00:02:18.049 08:46:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.049 08:46:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:18.049 08:46:33 -- pm/common@44 -- $ pid=2061769 00:02:18.049 08:46:33 -- pm/common@50 -- $ kill -TERM 2061769 00:02:18.049 08:46:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.049 08:46:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:18.049 08:46:33 -- pm/common@44 -- $ pid=2061792 00:02:18.049 08:46:33 -- pm/common@50 -- $ sudo -E kill -TERM 2061792 00:02:18.049 08:46:33 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:02:18.049 08:46:33 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:02:18.049 08:46:33 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:02:18.049 08:46:33 -- common/autotest_common.sh@1693 -- # lcov --version 00:02:18.049 08:46:33 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:02:18.049 08:46:34 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:02:18.049 08:46:34 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:02:18.049 08:46:34 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:02:18.049 08:46:34 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:02:18.049 08:46:34 -- scripts/common.sh@336 -- # IFS=.-: 00:02:18.049 08:46:34 -- scripts/common.sh@336 -- # read -ra ver1 00:02:18.049 08:46:34 -- scripts/common.sh@337 -- # IFS=.-: 00:02:18.049 08:46:34 -- scripts/common.sh@337 -- # read -ra ver2 00:02:18.049 08:46:34 -- scripts/common.sh@338 -- # local 'op=<' 00:02:18.049 08:46:34 -- scripts/common.sh@340 -- # ver1_l=2 00:02:18.049 08:46:34 -- scripts/common.sh@341 -- # ver2_l=1 00:02:18.049 08:46:34 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:02:18.049 08:46:34 -- scripts/common.sh@344 -- # case "$op" in 00:02:18.049 08:46:34 -- scripts/common.sh@345 -- # : 1 00:02:18.049 08:46:34 -- scripts/common.sh@364 -- # (( v = 0 )) 00:02:18.049 08:46:34 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:02:18.049 08:46:34 -- scripts/common.sh@365 -- # decimal 1 00:02:18.049 08:46:34 -- scripts/common.sh@353 -- # local d=1 00:02:18.049 08:46:34 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:02:18.049 08:46:34 -- scripts/common.sh@355 -- # echo 1 00:02:18.049 08:46:34 -- scripts/common.sh@365 -- # ver1[v]=1 00:02:18.049 08:46:34 -- scripts/common.sh@366 -- # decimal 2 00:02:18.049 08:46:34 -- scripts/common.sh@353 -- # local d=2 00:02:18.049 08:46:34 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:02:18.049 08:46:34 -- scripts/common.sh@355 -- # echo 2 00:02:18.049 08:46:34 -- scripts/common.sh@366 -- # ver2[v]=2 00:02:18.049 08:46:34 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:02:18.049 08:46:34 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:02:18.049 08:46:34 -- scripts/common.sh@368 -- # return 0 00:02:18.049 08:46:34 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:02:18.049 08:46:34 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:02:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.049 --rc genhtml_branch_coverage=1 00:02:18.049 --rc genhtml_function_coverage=1 00:02:18.049 --rc genhtml_legend=1 00:02:18.049 --rc geninfo_all_blocks=1 00:02:18.049 --rc geninfo_unexecuted_blocks=1 00:02:18.049 00:02:18.049 ' 00:02:18.049 08:46:34 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:02:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.049 --rc genhtml_branch_coverage=1 00:02:18.049 --rc genhtml_function_coverage=1 00:02:18.049 --rc genhtml_legend=1 00:02:18.049 --rc geninfo_all_blocks=1 00:02:18.049 --rc geninfo_unexecuted_blocks=1 00:02:18.049 00:02:18.049 ' 00:02:18.049 08:46:34 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:02:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.049 --rc genhtml_branch_coverage=1 00:02:18.049 --rc 
genhtml_function_coverage=1 00:02:18.049 --rc genhtml_legend=1 00:02:18.049 --rc geninfo_all_blocks=1 00:02:18.049 --rc geninfo_unexecuted_blocks=1 00:02:18.049 00:02:18.049 ' 00:02:18.049 08:46:34 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:02:18.049 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:02:18.049 --rc genhtml_branch_coverage=1 00:02:18.049 --rc genhtml_function_coverage=1 00:02:18.049 --rc genhtml_legend=1 00:02:18.049 --rc geninfo_all_blocks=1 00:02:18.049 --rc geninfo_unexecuted_blocks=1 00:02:18.049 00:02:18.049 ' 00:02:18.049 08:46:34 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:18.049 08:46:34 -- nvmf/common.sh@7 -- # uname -s 00:02:18.049 08:46:34 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:18.049 08:46:34 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:18.049 08:46:34 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:18.049 08:46:34 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:18.049 08:46:34 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:18.049 08:46:34 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:02:18.049 08:46:34 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:18.049 08:46:34 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:02:18.308 08:46:34 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:18.308 08:46:34 -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:18.308 08:46:34 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:18.308 08:46:34 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:02:18.308 08:46:34 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:02:18.308 08:46:34 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:18.308 08:46:34 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:02:18.308 08:46:34 -- scripts/common.sh@15 -- # shopt -s extglob 00:02:18.308 08:46:34 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:18.308 08:46:34 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:18.308 08:46:34 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:18.308 08:46:34 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.308 08:46:34 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.308 08:46:34 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.308 08:46:34 -- paths/export.sh@5 -- # export PATH 00:02:18.308 08:46:34 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:18.308 08:46:34 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:02:18.308 08:46:34 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:02:18.308 08:46:34 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:02:18.308 08:46:34 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:02:18.308 08:46:34 -- nvmf/common.sh@50 -- # : 0 00:02:18.308 08:46:34 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:02:18.308 08:46:34 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:02:18.308 08:46:34 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:02:18.308 08:46:34 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:18.308 08:46:34 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:18.308 08:46:34 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:02:18.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:02:18.308 08:46:34 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:02:18.308 08:46:34 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:02:18.308 08:46:34 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:02:18.308 08:46:34 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:18.308 08:46:34 -- spdk/autotest.sh@32 -- # uname -s 00:02:18.308 08:46:34 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:18.308 08:46:34 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:18.308 08:46:34 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:18.308 08:46:34 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:18.308 08:46:34 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:18.308 08:46:34 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:18.308 08:46:34 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:18.308 08:46:34 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:18.308 08:46:34 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:18.308 08:46:34 -- spdk/autotest.sh@48 -- # udevadm_pid=2124248 00:02:18.308 08:46:34 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:18.308 
08:46:34 -- pm/common@17 -- # local monitor 00:02:18.308 08:46:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.308 08:46:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.308 08:46:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.308 08:46:34 -- pm/common@21 -- # date +%s 00:02:18.308 08:46:34 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:18.308 08:46:34 -- pm/common@21 -- # date +%s 00:02:18.308 08:46:34 -- pm/common@25 -- # sleep 1 00:02:18.308 08:46:34 -- pm/common@21 -- # date +%s 00:02:18.308 08:46:34 -- pm/common@21 -- # date +%s 00:02:18.308 08:46:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088794 00:02:18.308 08:46:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088794 00:02:18.308 08:46:34 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088794 00:02:18.308 08:46:34 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732088794 00:02:18.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088794_collect-cpu-load.pm.log 00:02:18.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088794_collect-vmstat.pm.log 00:02:18.308 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088794_collect-cpu-temp.pm.log 00:02:18.308 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732088794_collect-bmc-pm.bmc.pm.log 00:02:19.245 08:46:35 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:19.245 08:46:35 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:19.245 08:46:35 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:19.245 08:46:35 -- common/autotest_common.sh@10 -- # set +x 00:02:19.245 08:46:35 -- spdk/autotest.sh@59 -- # create_test_list 00:02:19.245 08:46:35 -- common/autotest_common.sh@752 -- # xtrace_disable 00:02:19.245 08:46:35 -- common/autotest_common.sh@10 -- # set +x 00:02:19.245 08:46:35 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:19.245 08:46:35 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.245 08:46:35 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.245 08:46:35 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:19.245 08:46:35 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:19.245 08:46:35 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:19.245 08:46:35 -- common/autotest_common.sh@1457 -- # uname 00:02:19.245 08:46:35 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:02:19.245 08:46:35 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:19.245 08:46:35 -- common/autotest_common.sh@1477 -- # uname 00:02:19.245 08:46:35 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:02:19.245 08:46:35 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:02:19.246 08:46:35 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:02:19.246 lcov: LCOV version 1.15 00:02:19.246 08:46:35 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:41.185 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.185 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:44.596 08:47:00 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:02:44.596 08:47:00 -- common/autotest_common.sh@726 -- # xtrace_disable 00:02:44.596 08:47:00 -- common/autotest_common.sh@10 -- # set +x 00:02:44.596 08:47:00 -- spdk/autotest.sh@78 -- # rm -f 00:02:44.596 08:47:00 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:47.134 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:47.134 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.134 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:47.134 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:47.134 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:47.134 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:47.134 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.6 
(8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:47.393 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:47.651 08:47:03 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:02:47.651 08:47:03 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:02:47.651 08:47:03 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:02:47.651 08:47:03 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:02:47.651 08:47:03 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:02:47.651 08:47:03 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:02:47.651 08:47:03 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:02:47.651 08:47:03 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:47.651 08:47:03 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:02:47.651 08:47:03 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:02:47.651 08:47:03 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:02:47.651 08:47:03 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:02:47.651 08:47:03 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:02:47.651 08:47:03 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:02:47.651 08:47:03 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:47.651 No valid GPT data, bailing 00:02:47.651 08:47:03 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:47.651 08:47:03 -- scripts/common.sh@394 -- # pt= 00:02:47.651 08:47:03 -- scripts/common.sh@395 -- # return 1 00:02:47.651 08:47:03 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:47.651 1+0 records in 00:02:47.651 1+0 records out 00:02:47.651 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00432001 s, 243 MB/s 00:02:47.651 08:47:03 -- spdk/autotest.sh@105 -- # sync 00:02:47.651 08:47:03 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:47.651 08:47:03 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:47.651 08:47:03 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:02:54.221 08:47:09 -- spdk/autotest.sh@111 -- # uname -s 00:02:54.221 08:47:09 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:02:54.221 08:47:09 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:02:54.221 08:47:09 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:02:56.130 Hugepages 00:02:56.130 node hugesize free / total 00:02:56.130 node0 1048576kB 0 / 0 00:02:56.130 node0 2048kB 0 / 0 00:02:56.130 node1 1048576kB 0 / 0 00:02:56.130 node1 2048kB 0 / 0 00:02:56.130 00:02:56.130 Type BDF Vendor Device NUMA Driver Device Block devices 00:02:56.130 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:02:56.130 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:02:56.130 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:02:56.130 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:02:56.130 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:02:56.130 08:47:12 -- spdk/autotest.sh@117 -- # uname -s 00:02:56.130 08:47:12 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:02:56.130 08:47:12 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:02:56.130 08:47:12 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:02:59.431 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:02:59.431 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:00.000 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:00.000 08:47:15 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:00.938 08:47:16 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:00.938 08:47:16 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:00.938 08:47:16 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:00.938 08:47:16 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:00.938 08:47:16 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:00.938 08:47:16 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:00.938 08:47:16 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:00.938 08:47:16 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:00.938 08:47:16 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:00.938 08:47:16 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:00.938 08:47:16 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:00.938 08:47:16 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.229 Waiting for block devices as requested 00:03:04.229 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:04.229 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:04.229 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:04.229 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:04.229 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:04.229 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:04.229 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:04.488 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:04.488 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:04.488 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:04.747 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:04.747 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:04.747 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:05.006 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:05.006 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:05.006 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:05.006 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:05.265 08:47:21 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:05.265 08:47:21 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1487 -- # grep 
0000:5e:00.0/nvme/nvme 00:03:05.265 08:47:21 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:05.265 08:47:21 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:05.265 08:47:21 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:05.265 08:47:21 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:05.265 08:47:21 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:05.265 08:47:21 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:05.265 08:47:21 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:05.265 08:47:21 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:05.265 08:47:21 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:05.265 08:47:21 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:05.265 08:47:21 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:05.265 08:47:21 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:05.265 08:47:21 -- common/autotest_common.sh@1543 -- # continue 00:03:05.265 08:47:21 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:05.265 08:47:21 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:05.265 08:47:21 -- common/autotest_common.sh@10 -- # set +x 00:03:05.265 08:47:21 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:05.265 08:47:21 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:05.265 08:47:21 -- common/autotest_common.sh@10 -- # 
set +x 00:03:05.265 08:47:21 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:08.558 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:08.558 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:09.127 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:09.127 08:47:25 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:03:09.127 08:47:25 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:09.127 08:47:25 -- common/autotest_common.sh@10 -- # set +x 00:03:09.127 08:47:25 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:03:09.127 08:47:25 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:03:09.127 08:47:25 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:03:09.127 08:47:25 -- common/autotest_common.sh@1563 -- # bdfs=() 00:03:09.127 08:47:25 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:03:09.127 08:47:25 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:03:09.127 08:47:25 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:03:09.127 08:47:25 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:03:09.127 08:47:25 -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:03:09.127 08:47:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:09.127 08:47:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:09.127 08:47:25 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:09.127 08:47:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:09.388 08:47:25 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:09.388 08:47:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:09.388 08:47:25 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:03:09.388 08:47:25 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:03:09.388 08:47:25 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:03:09.388 08:47:25 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:09.388 08:47:25 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:03:09.388 08:47:25 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:03:09.388 08:47:25 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:03:09.388 08:47:25 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:03:09.388 08:47:25 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=2138683 00:03:09.388 08:47:25 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:09.388 08:47:25 -- common/autotest_common.sh@1585 -- # waitforlisten 2138683 00:03:09.388 08:47:25 -- common/autotest_common.sh@835 -- # '[' -z 2138683 ']' 00:03:09.388 08:47:25 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:09.388 08:47:25 -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:09.388 08:47:25 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:03:09.388 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:09.388 08:47:25 -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:09.388 08:47:25 -- common/autotest_common.sh@10 -- # set +x 00:03:09.388 [2024-11-20 08:47:25.274211] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:09.388 [2024-11-20 08:47:25.274257] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2138683 ] 00:03:09.388 [2024-11-20 08:47:25.350540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:09.388 [2024-11-20 08:47:25.393113] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:09.647 08:47:25 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:09.647 08:47:25 -- common/autotest_common.sh@868 -- # return 0 00:03:09.647 08:47:25 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:03:09.647 08:47:25 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:03:09.647 08:47:25 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:03:12.934 nvme0n1 00:03:12.934 08:47:28 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:12.934 [2024-11-20 08:47:28.805787] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:03:12.934 request: 00:03:12.934 { 00:03:12.934 "nvme_ctrlr_name": "nvme0", 00:03:12.934 "password": "test", 00:03:12.934 "method": "bdev_nvme_opal_revert", 00:03:12.934 "req_id": 1 00:03:12.934 } 00:03:12.934 Got JSON-RPC error response 00:03:12.934 response: 00:03:12.934 { 00:03:12.934 "code": -32602, 
00:03:12.934 "message": "Invalid parameters" 00:03:12.934 } 00:03:12.934 08:47:28 -- common/autotest_common.sh@1591 -- # true 00:03:12.934 08:47:28 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:03:12.934 08:47:28 -- common/autotest_common.sh@1595 -- # killprocess 2138683 00:03:12.934 08:47:28 -- common/autotest_common.sh@954 -- # '[' -z 2138683 ']' 00:03:12.934 08:47:28 -- common/autotest_common.sh@958 -- # kill -0 2138683 00:03:12.934 08:47:28 -- common/autotest_common.sh@959 -- # uname 00:03:12.934 08:47:28 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:12.934 08:47:28 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2138683 00:03:12.934 08:47:28 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:12.935 08:47:28 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:12.935 08:47:28 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2138683' 00:03:12.935 killing process with pid 2138683 00:03:12.935 08:47:28 -- common/autotest_common.sh@973 -- # kill 2138683 00:03:12.935 08:47:28 -- common/autotest_common.sh@978 -- # wait 2138683 00:03:14.838 08:47:30 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:03:14.838 08:47:30 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:03:14.838 08:47:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:14.838 08:47:30 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:03:14.838 08:47:30 -- spdk/autotest.sh@149 -- # timing_enter lib 00:03:14.838 08:47:30 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:14.838 08:47:30 -- common/autotest_common.sh@10 -- # set +x 00:03:14.838 08:47:30 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:03:14.838 08:47:30 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:14.838 08:47:30 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.838 08:47:30 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.838 08:47:30 -- 
common/autotest_common.sh@10 -- # set +x 00:03:14.838 ************************************ 00:03:14.838 START TEST env 00:03:14.838 ************************************ 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:03:14.838 * Looking for test storage... 00:03:14.838 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1693 -- # lcov --version 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:14.838 08:47:30 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:14.838 08:47:30 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:14.838 08:47:30 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:14.838 08:47:30 env -- scripts/common.sh@336 -- # IFS=.-: 00:03:14.838 08:47:30 env -- scripts/common.sh@336 -- # read -ra ver1 00:03:14.838 08:47:30 env -- scripts/common.sh@337 -- # IFS=.-: 00:03:14.838 08:47:30 env -- scripts/common.sh@337 -- # read -ra ver2 00:03:14.838 08:47:30 env -- scripts/common.sh@338 -- # local 'op=<' 00:03:14.838 08:47:30 env -- scripts/common.sh@340 -- # ver1_l=2 00:03:14.838 08:47:30 env -- scripts/common.sh@341 -- # ver2_l=1 00:03:14.838 08:47:30 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:14.838 08:47:30 env -- scripts/common.sh@344 -- # case "$op" in 00:03:14.838 08:47:30 env -- scripts/common.sh@345 -- # : 1 00:03:14.838 08:47:30 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:14.838 08:47:30 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:14.838 08:47:30 env -- scripts/common.sh@365 -- # decimal 1 00:03:14.838 08:47:30 env -- scripts/common.sh@353 -- # local d=1 00:03:14.838 08:47:30 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:14.838 08:47:30 env -- scripts/common.sh@355 -- # echo 1 00:03:14.838 08:47:30 env -- scripts/common.sh@365 -- # ver1[v]=1 00:03:14.838 08:47:30 env -- scripts/common.sh@366 -- # decimal 2 00:03:14.838 08:47:30 env -- scripts/common.sh@353 -- # local d=2 00:03:14.838 08:47:30 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:14.838 08:47:30 env -- scripts/common.sh@355 -- # echo 2 00:03:14.838 08:47:30 env -- scripts/common.sh@366 -- # ver2[v]=2 00:03:14.838 08:47:30 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:14.838 08:47:30 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:14.838 08:47:30 env -- scripts/common.sh@368 -- # return 0 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:14.838 08:47:30 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:14.838 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.838 --rc genhtml_branch_coverage=1 00:03:14.839 --rc genhtml_function_coverage=1 00:03:14.839 --rc genhtml_legend=1 00:03:14.839 --rc geninfo_all_blocks=1 00:03:14.839 --rc geninfo_unexecuted_blocks=1 00:03:14.839 00:03:14.839 ' 00:03:14.839 08:47:30 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.839 --rc genhtml_branch_coverage=1 00:03:14.839 --rc genhtml_function_coverage=1 00:03:14.839 --rc genhtml_legend=1 00:03:14.839 --rc geninfo_all_blocks=1 00:03:14.839 --rc geninfo_unexecuted_blocks=1 00:03:14.839 00:03:14.839 ' 00:03:14.839 08:47:30 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:03:14.839 --rc genhtml_branch_coverage=1 00:03:14.839 --rc genhtml_function_coverage=1 00:03:14.839 --rc genhtml_legend=1 00:03:14.839 --rc geninfo_all_blocks=1 00:03:14.839 --rc geninfo_unexecuted_blocks=1 00:03:14.839 00:03:14.839 ' 00:03:14.839 08:47:30 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:14.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:14.839 --rc genhtml_branch_coverage=1 00:03:14.839 --rc genhtml_function_coverage=1 00:03:14.839 --rc genhtml_legend=1 00:03:14.839 --rc geninfo_all_blocks=1 00:03:14.839 --rc geninfo_unexecuted_blocks=1 00:03:14.839 00:03:14.839 ' 00:03:14.839 08:47:30 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:14.839 08:47:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:14.839 08:47:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:14.839 08:47:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:14.839 ************************************ 00:03:14.839 START TEST env_memory 00:03:14.839 ************************************ 00:03:14.839 08:47:30 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:03:14.839 00:03:14.839 00:03:14.839 CUnit - A unit testing framework for C - Version 2.1-3 00:03:14.839 http://cunit.sourceforge.net/ 00:03:14.839 00:03:14.839 00:03:14.839 Suite: memory 00:03:14.839 Test: alloc and free memory map ...[2024-11-20 08:47:30.762311] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:03:14.839 passed 00:03:14.839 Test: mem map translation ...[2024-11-20 08:47:30.781939] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:03:14.839 [2024-11-20 
08:47:30.781962] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:03:14.839 [2024-11-20 08:47:30.781998] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:03:14.839 [2024-11-20 08:47:30.782005] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:03:14.839 passed 00:03:14.839 Test: mem map registration ...[2024-11-20 08:47:30.820997] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:03:14.839 [2024-11-20 08:47:30.821012] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:03:14.839 passed 00:03:14.839 Test: mem map adjacent registrations ...passed 00:03:14.839 00:03:14.839 Run Summary: Type Total Ran Passed Failed Inactive 00:03:14.839 suites 1 1 n/a 0 0 00:03:14.839 tests 4 4 4 0 0 00:03:14.839 asserts 152 152 152 0 n/a 00:03:14.839 00:03:14.839 Elapsed time = 0.139 seconds 00:03:14.839 00:03:14.839 real 0m0.152s 00:03:14.839 user 0m0.141s 00:03:14.839 sys 0m0.010s 00:03:14.839 08:47:30 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:14.839 08:47:30 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:03:14.839 ************************************ 00:03:14.839 END TEST env_memory 00:03:14.839 ************************************ 00:03:15.099 08:47:30 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:15.099 08:47:30 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:03:15.099 08:47:30 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:15.099 08:47:30 env -- common/autotest_common.sh@10 -- # set +x 00:03:15.099 ************************************ 00:03:15.099 START TEST env_vtophys 00:03:15.099 ************************************ 00:03:15.099 08:47:30 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:03:15.099 EAL: lib.eal log level changed from notice to debug 00:03:15.099 EAL: Detected lcore 0 as core 0 on socket 0 00:03:15.099 EAL: Detected lcore 1 as core 1 on socket 0 00:03:15.099 EAL: Detected lcore 2 as core 2 on socket 0 00:03:15.099 EAL: Detected lcore 3 as core 3 on socket 0 00:03:15.099 EAL: Detected lcore 4 as core 4 on socket 0 00:03:15.099 EAL: Detected lcore 5 as core 5 on socket 0 00:03:15.099 EAL: Detected lcore 6 as core 6 on socket 0 00:03:15.099 EAL: Detected lcore 7 as core 8 on socket 0 00:03:15.099 EAL: Detected lcore 8 as core 9 on socket 0 00:03:15.099 EAL: Detected lcore 9 as core 10 on socket 0 00:03:15.099 EAL: Detected lcore 10 as core 11 on socket 0 00:03:15.099 EAL: Detected lcore 11 as core 12 on socket 0 00:03:15.099 EAL: Detected lcore 12 as core 13 on socket 0 00:03:15.099 EAL: Detected lcore 13 as core 16 on socket 0 00:03:15.099 EAL: Detected lcore 14 as core 17 on socket 0 00:03:15.099 EAL: Detected lcore 15 as core 18 on socket 0 00:03:15.099 EAL: Detected lcore 16 as core 19 on socket 0 00:03:15.099 EAL: Detected lcore 17 as core 20 on socket 0 00:03:15.099 EAL: Detected lcore 18 as core 21 on socket 0 00:03:15.099 EAL: Detected lcore 19 as core 25 on socket 0 00:03:15.099 EAL: Detected lcore 20 as core 26 on socket 0 00:03:15.099 EAL: Detected lcore 21 as core 27 on socket 0 00:03:15.099 EAL: Detected lcore 22 as core 28 on socket 0 00:03:15.099 EAL: Detected lcore 23 as core 29 on socket 0 00:03:15.099 EAL: Detected lcore 24 as core 0 on socket 1 00:03:15.099 EAL: Detected lcore 25 
as core 1 on socket 1 00:03:15.099 EAL: Detected lcore 26 as core 2 on socket 1 00:03:15.099 EAL: Detected lcore 27 as core 3 on socket 1 00:03:15.099 EAL: Detected lcore 28 as core 4 on socket 1 00:03:15.099 EAL: Detected lcore 29 as core 5 on socket 1 00:03:15.099 EAL: Detected lcore 30 as core 6 on socket 1 00:03:15.099 EAL: Detected lcore 31 as core 9 on socket 1 00:03:15.099 EAL: Detected lcore 32 as core 10 on socket 1 00:03:15.099 EAL: Detected lcore 33 as core 11 on socket 1 00:03:15.099 EAL: Detected lcore 34 as core 12 on socket 1 00:03:15.099 EAL: Detected lcore 35 as core 13 on socket 1 00:03:15.099 EAL: Detected lcore 36 as core 16 on socket 1 00:03:15.099 EAL: Detected lcore 37 as core 17 on socket 1 00:03:15.099 EAL: Detected lcore 38 as core 18 on socket 1 00:03:15.099 EAL: Detected lcore 39 as core 19 on socket 1 00:03:15.099 EAL: Detected lcore 40 as core 20 on socket 1 00:03:15.099 EAL: Detected lcore 41 as core 21 on socket 1 00:03:15.099 EAL: Detected lcore 42 as core 24 on socket 1 00:03:15.099 EAL: Detected lcore 43 as core 25 on socket 1 00:03:15.099 EAL: Detected lcore 44 as core 26 on socket 1 00:03:15.099 EAL: Detected lcore 45 as core 27 on socket 1 00:03:15.099 EAL: Detected lcore 46 as core 28 on socket 1 00:03:15.099 EAL: Detected lcore 47 as core 29 on socket 1 00:03:15.099 EAL: Detected lcore 48 as core 0 on socket 0 00:03:15.099 EAL: Detected lcore 49 as core 1 on socket 0 00:03:15.099 EAL: Detected lcore 50 as core 2 on socket 0 00:03:15.099 EAL: Detected lcore 51 as core 3 on socket 0 00:03:15.099 EAL: Detected lcore 52 as core 4 on socket 0 00:03:15.099 EAL: Detected lcore 53 as core 5 on socket 0 00:03:15.099 EAL: Detected lcore 54 as core 6 on socket 0 00:03:15.099 EAL: Detected lcore 55 as core 8 on socket 0 00:03:15.099 EAL: Detected lcore 56 as core 9 on socket 0 00:03:15.099 EAL: Detected lcore 57 as core 10 on socket 0 00:03:15.099 EAL: Detected lcore 58 as core 11 on socket 0 00:03:15.099 EAL: Detected lcore 59 as core 
12 on socket 0 00:03:15.099 EAL: Detected lcore 60 as core 13 on socket 0 00:03:15.099 EAL: Detected lcore 61 as core 16 on socket 0 00:03:15.099 EAL: Detected lcore 62 as core 17 on socket 0 00:03:15.099 EAL: Detected lcore 63 as core 18 on socket 0 00:03:15.099 EAL: Detected lcore 64 as core 19 on socket 0 00:03:15.099 EAL: Detected lcore 65 as core 20 on socket 0 00:03:15.099 EAL: Detected lcore 66 as core 21 on socket 0 00:03:15.099 EAL: Detected lcore 67 as core 25 on socket 0 00:03:15.099 EAL: Detected lcore 68 as core 26 on socket 0 00:03:15.099 EAL: Detected lcore 69 as core 27 on socket 0 00:03:15.099 EAL: Detected lcore 70 as core 28 on socket 0 00:03:15.099 EAL: Detected lcore 71 as core 29 on socket 0 00:03:15.099 EAL: Detected lcore 72 as core 0 on socket 1 00:03:15.099 EAL: Detected lcore 73 as core 1 on socket 1 00:03:15.099 EAL: Detected lcore 74 as core 2 on socket 1 00:03:15.099 EAL: Detected lcore 75 as core 3 on socket 1 00:03:15.099 EAL: Detected lcore 76 as core 4 on socket 1 00:03:15.099 EAL: Detected lcore 77 as core 5 on socket 1 00:03:15.099 EAL: Detected lcore 78 as core 6 on socket 1 00:03:15.099 EAL: Detected lcore 79 as core 9 on socket 1 00:03:15.099 EAL: Detected lcore 80 as core 10 on socket 1 00:03:15.099 EAL: Detected lcore 81 as core 11 on socket 1 00:03:15.099 EAL: Detected lcore 82 as core 12 on socket 1 00:03:15.099 EAL: Detected lcore 83 as core 13 on socket 1 00:03:15.099 EAL: Detected lcore 84 as core 16 on socket 1 00:03:15.099 EAL: Detected lcore 85 as core 17 on socket 1 00:03:15.099 EAL: Detected lcore 86 as core 18 on socket 1 00:03:15.099 EAL: Detected lcore 87 as core 19 on socket 1 00:03:15.099 EAL: Detected lcore 88 as core 20 on socket 1 00:03:15.099 EAL: Detected lcore 89 as core 21 on socket 1 00:03:15.099 EAL: Detected lcore 90 as core 24 on socket 1 00:03:15.099 EAL: Detected lcore 91 as core 25 on socket 1 00:03:15.099 EAL: Detected lcore 92 as core 26 on socket 1 00:03:15.099 EAL: Detected lcore 93 as core 
27 on socket 1
00:03:15.099 EAL: Detected lcore 94 as core 28 on socket 1
00:03:15.099 EAL: Detected lcore 95 as core 29 on socket 1
00:03:15.099 EAL: Maximum logical cores by configuration: 128
00:03:15.099 EAL: Detected CPU lcores: 96
00:03:15.099 EAL: Detected NUMA nodes: 2
00:03:15.099 EAL: Checking presence of .so 'librte_eal.so.24.1'
00:03:15.099 EAL: Detected shared linkage of DPDK
00:03:15.099 EAL: No shared files mode enabled, IPC will be disabled
00:03:15.099 EAL: Bus pci wants IOVA as 'DC'
00:03:15.099 EAL: Buses did not request a specific IOVA mode.
00:03:15.100 EAL: IOMMU is available, selecting IOVA as VA mode.
00:03:15.100 EAL: Selected IOVA mode 'VA'
00:03:15.100 EAL: Probing VFIO support...
00:03:15.100 EAL: IOMMU type 1 (Type 1) is supported
00:03:15.100 EAL: IOMMU type 7 (sPAPR) is not supported
00:03:15.100 EAL: IOMMU type 8 (No-IOMMU) is not supported
00:03:15.100 EAL: VFIO support initialized
00:03:15.100 EAL: Ask a virtual area of 0x2e000 bytes
00:03:15.100 EAL: Virtual area found at 0x200000000000 (size = 0x2e000)
00:03:15.100 EAL: Setting up physically contiguous memory...
00:03:15.100 EAL: Setting maximum number of open files to 524288
00:03:15.100 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152
00:03:15.100 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152
00:03:15.100 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x20000002e000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x200000200000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x200000200000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x200400200000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x200400400000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x200400400000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x200800400000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x200800600000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x200800600000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x200c00600000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 0, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000
00:03:15.100 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x201000800000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x201400a00000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x201800c00000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000
00:03:15.100 EAL: Ask a virtual area of 0x61000 bytes
00:03:15.100 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000)
00:03:15.100 EAL: Memseg list allocated at socket 1, page size 0x800kB
00:03:15.100 EAL: Ask a virtual area of 0x400000000 bytes
00:03:15.100 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000)
00:03:15.100 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000
00:03:15.100 EAL: Hugepages will be freed exactly as allocated.
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: TSC frequency is ~2300000 KHz
00:03:15.100 EAL: Main lcore 0 is ready (tid=7fd662373a00;cpuset=[0])
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 0
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 2MB
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: No PCI address specified using 'addr=' in: bus=pci
00:03:15.100 EAL: Mem event callback 'spdk:(nil)' registered
00:03:15.100
00:03:15.100
00:03:15.100 CUnit - A unit testing framework for C - Version 2.1-3
00:03:15.100 http://cunit.sourceforge.net/
00:03:15.100
00:03:15.100
00:03:15.100 Suite: components_suite
00:03:15.100 Test: vtophys_malloc_test ...passed
00:03:15.100 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 4MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 4MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 6MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 6MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 10MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 10MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 18MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 18MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 34MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 34MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 66MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was shrunk by 66MB
00:03:15.100 EAL: Trying to obtain current memory policy.
00:03:15.100 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.100 EAL: Restoring previous memory policy: 4
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.100 EAL: request: mp_malloc_sync
00:03:15.100 EAL: No shared files mode enabled, IPC is disabled
00:03:15.100 EAL: Heap on socket 0 was expanded by 130MB
00:03:15.100 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.360 EAL: request: mp_malloc_sync
00:03:15.360 EAL: No shared files mode enabled, IPC is disabled
00:03:15.360 EAL: Heap on socket 0 was shrunk by 130MB
00:03:15.360 EAL: Trying to obtain current memory policy.
00:03:15.360 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.360 EAL: Restoring previous memory policy: 4
00:03:15.360 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.360 EAL: request: mp_malloc_sync
00:03:15.360 EAL: No shared files mode enabled, IPC is disabled
00:03:15.360 EAL: Heap on socket 0 was expanded by 258MB
00:03:15.360 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.360 EAL: request: mp_malloc_sync
00:03:15.360 EAL: No shared files mode enabled, IPC is disabled
00:03:15.360 EAL: Heap on socket 0 was shrunk by 258MB
00:03:15.360 EAL: Trying to obtain current memory policy.
00:03:15.360 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.360 EAL: Restoring previous memory policy: 4
00:03:15.360 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.360 EAL: request: mp_malloc_sync
00:03:15.360 EAL: No shared files mode enabled, IPC is disabled
00:03:15.360 EAL: Heap on socket 0 was expanded by 514MB
00:03:15.618 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.618 EAL: request: mp_malloc_sync
00:03:15.618 EAL: No shared files mode enabled, IPC is disabled
00:03:15.618 EAL: Heap on socket 0 was shrunk by 514MB
00:03:15.618 EAL: Trying to obtain current memory policy.
00:03:15.618 EAL: Setting policy MPOL_PREFERRED for socket 0
00:03:15.878 EAL: Restoring previous memory policy: 4
00:03:15.878 EAL: Calling mem event callback 'spdk:(nil)'
00:03:15.878 EAL: request: mp_malloc_sync
00:03:15.878 EAL: No shared files mode enabled, IPC is disabled
00:03:15.878 EAL: Heap on socket 0 was expanded by 1026MB
00:03:15.878 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.138 EAL: request: mp_malloc_sync
00:03:16.138 EAL: No shared files mode enabled, IPC is disabled
00:03:16.138 EAL: Heap on socket 0 was shrunk by 1026MB
00:03:16.138 passed
00:03:16.138
00:03:16.138 Run Summary: Type Total Ran Passed Failed Inactive
00:03:16.138 suites 1 1 n/a 0 0
00:03:16.138 tests 2 2 2 0 0
00:03:16.138 asserts 497 497 497 0 n/a
00:03:16.138
00:03:16.138 Elapsed time = 0.978 seconds
00:03:16.138 EAL: Calling mem event callback 'spdk:(nil)'
00:03:16.138 EAL: request: mp_malloc_sync
00:03:16.138 EAL: No shared files mode enabled, IPC is disabled
00:03:16.138 EAL: Heap on socket 0 was shrunk by 2MB
00:03:16.138 EAL: No shared files mode enabled, IPC is disabled
00:03:16.138 EAL: No shared files mode enabled, IPC is disabled
00:03:16.138 EAL: No shared files mode enabled, IPC is disabled
00:03:16.138
00:03:16.138 real 0m1.113s
00:03:16.138 user 0m0.655s
00:03:16.138 sys 0m0.430s
00:03:16.138 08:47:32 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:16.138 08:47:32 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x
00:03:16.138 ************************************
00:03:16.138 END TEST env_vtophys
00:03:16.138 ************************************
00:03:16.138 08:47:32 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
08:47:32 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:16.138 08:47:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:16.138 08:47:32 env -- common/autotest_common.sh@10 -- # set +x
00:03:16.138
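Editor's note for anyone post-processing this log: the EAL messages above pair every "expanded by NMB" with a matching "shrunk by NMB" as vtophys_spdk_malloc_test allocates and frees buffers of growing sizes. A minimal checker for that invariant might look like the following; this is a hypothetical helper for log analysis, not part of the SPDK tree, and the sample lines are copied from the output above.

```python
import re

# Hypothetical log-analysis helper (not part of SPDK): replay the EAL heap
# expand/shrink messages and compute the net heap growth per socket in MB.
HEAP_RE = re.compile(r"EAL: Heap on socket (\d+) was (expanded|shrunk) by (\d+)MB")

def heap_balance(lines):
    """Return {socket: net growth in MB} after replaying the log lines."""
    net = {}
    for line in lines:
        m = HEAP_RE.search(line)
        if not m:
            continue
        sock, action, mb = int(m.group(1)), m.group(2), int(m.group(3))
        net[sock] = net.get(sock, 0) + (mb if action == "expanded" else -mb)
    return net

log = [
    "00:03:15.100 EAL: Heap on socket 0 was expanded by 4MB",
    "00:03:15.100 EAL: Heap on socket 0 was shrunk by 4MB",
    "00:03:15.878 EAL: Heap on socket 0 was expanded by 1026MB",
    "00:03:16.138 EAL: Heap on socket 0 was shrunk by 1026MB",
]
print(heap_balance(log))  # every expansion is matched by a shrink: {0: 0}
```

On a clean run like the one above, the net growth is zero for every socket; a nonzero value would point at a leaked allocation.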
************************************
00:03:16.138 START TEST env_pci
00:03:16.138 ************************************
00:03:16.138 08:47:32 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut
00:03:16.138
00:03:16.138
00:03:16.138 CUnit - A unit testing framework for C - Version 2.1-3
00:03:16.138 http://cunit.sourceforge.net/
00:03:16.138
00:03:16.138
00:03:16.138 Suite: pci
00:03:16.138 Test: pci_hook ...[2024-11-20 08:47:32.144055] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2139943 has claimed it
00:03:16.138 EAL: Cannot find device (10000:00:01.0)
00:03:16.138 EAL: Failed to attach device on primary process
00:03:16.138 passed
00:03:16.138
00:03:16.138 Run Summary: Type Total Ran Passed Failed Inactive
00:03:16.138 suites 1 1 n/a 0 0
00:03:16.138 tests 1 1 1 0 0
00:03:16.138 asserts 25 25 25 0 n/a
00:03:16.138
00:03:16.138 Elapsed time = 0.027 seconds
00:03:16.138
00:03:16.138 real 0m0.046s
00:03:16.138 user 0m0.016s
00:03:16.138 sys 0m0.030s
00:03:16.138 08:47:32 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:16.138 08:47:32 env.env_pci -- common/autotest_common.sh@10 -- # set +x
00:03:16.138 ************************************
00:03:16.138 END TEST env_pci
00:03:16.138 ************************************
00:03:16.398 08:47:32 env -- env/env.sh@14 -- # argv='-c 0x1 '
00:03:16.398 08:47:32 env -- env/env.sh@15 -- # uname
00:03:16.398 08:47:32 env -- env/env.sh@15 -- # '[' Linux = Linux ']'
00:03:16.398 08:47:32 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000
00:03:16.398 08:47:32 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:16.398 08:47:32 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']'
00:03:16.398 08:47:32 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:16.398 08:47:32 env -- common/autotest_common.sh@10 -- # set +x
00:03:16.398 ************************************
00:03:16.398 START TEST env_dpdk_post_init
00:03:16.398 ************************************
00:03:16.398 08:47:32 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000
00:03:16.398 EAL: Detected CPU lcores: 96
00:03:16.398 EAL: Detected NUMA nodes: 2
00:03:16.398 EAL: Detected shared linkage of DPDK
00:03:16.398 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:16.398 EAL: Selected IOVA mode 'VA'
00:03:16.398 EAL: VFIO support initialized
00:03:16.398 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:16.398 EAL: Using IOMMU type 1 (Type 1)
00:03:16.398 EAL: Ignore mapping IO port bar(1)
00:03:16.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)
00:03:16.398 EAL: Ignore mapping IO port bar(1)
00:03:16.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0)
00:03:16.398 EAL: Ignore mapping IO port bar(1)
00:03:16.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0)
00:03:16.398 EAL: Ignore mapping IO port bar(1)
00:03:16.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0)
00:03:16.658 EAL: Ignore mapping IO port bar(1)
00:03:16.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0)
00:03:16.658 EAL: Ignore mapping IO port bar(1)
00:03:16.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0)
00:03:16.658 EAL: Ignore mapping IO port bar(1)
00:03:16.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0)
00:03:16.658 EAL: Ignore mapping IO port bar(1)
00:03:16.658 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0)
00:03:17.228 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)
00:03:17.228 EAL: Ignore mapping IO port bar(1)
00:03:17.228 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1)
00:03:17.228 EAL: Ignore mapping IO port bar(1)
00:03:17.228 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1)
00:03:17.228 EAL: Ignore mapping IO port bar(1)
00:03:17.228 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1)
00:03:17.228 EAL: Ignore mapping IO port bar(1)
00:03:17.228 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1)
00:03:17.486 EAL: Ignore mapping IO port bar(1)
00:03:17.486 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1)
00:03:17.486 EAL: Ignore mapping IO port bar(1)
00:03:17.487 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1)
00:03:17.487 EAL: Ignore mapping IO port bar(1)
00:03:17.487 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1)
00:03:17.487 EAL: Ignore mapping IO port bar(1)
00:03:17.487 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1)
00:03:20.775 EAL: Releasing PCI mapped resource for 0000:5e:00.0
00:03:20.775 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000
00:03:20.775 Starting DPDK initialization...
00:03:20.775 Starting SPDK post initialization...
00:03:20.775 SPDK NVMe probe
00:03:20.775 Attaching to 0000:5e:00.0
00:03:20.775 Attached to 0000:5e:00.0
00:03:20.775 Cleaning up...
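Editor's note: the "EAL: Probe PCI driver" lines above record which SPDK driver bound to each PCI BDF during env_dpdk_post_init. A small scraper for those lines could look like this; it is an illustrative helper for reading this log, not an SPDK utility, and the sample lines are taken from the output above.

```python
import re

# Hypothetical scraper (illustration only): extract (driver, bdf, socket)
# tuples from EAL "Probe PCI driver" messages, in probe order.
PROBE_RE = re.compile(
    r"EAL: Probe PCI driver: (\S+) \((\w+:\w+)\) device: (\S+) \(socket (\d+)\)")

def probed_devices(lines):
    """Return a list of (driver, bdf, socket) tuples found in the log lines."""
    out = []
    for line in lines:
        m = PROBE_RE.search(line)
        if m:
            out.append((m.group(1), m.group(3), int(m.group(4))))
    return out

log = [
    "00:03:16.398 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0)",
    "00:03:17.228 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0)",
]
print(probed_devices(log))
```

Run against the full section above, this would show eight ioat channels per socket plus the single NVMe controller at 0000:5e:00.0.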
00:03:20.775
00:03:20.775 real 0m4.335s
00:03:20.775 user 0m2.982s
00:03:20.775 sys 0m0.426s
08:47:36 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:20.775 08:47:36 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x
00:03:20.775 ************************************
00:03:20.775 END TEST env_dpdk_post_init
00:03:20.775 ************************************
00:03:20.775 08:47:36 env -- env/env.sh@26 -- # uname
00:03:20.775 08:47:36 env -- env/env.sh@26 -- # '[' Linux = Linux ']'
00:03:20.775 08:47:36 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
08:47:36 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:20.775 08:47:36 env -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:20.775 08:47:36 env -- common/autotest_common.sh@10 -- # set +x
00:03:20.775 ************************************
00:03:20.775 START TEST env_mem_callbacks
00:03:20.775 ************************************
00:03:20.775 08:47:36 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks
00:03:20.775 EAL: Detected CPU lcores: 96
00:03:20.775 EAL: Detected NUMA nodes: 2
00:03:20.775 EAL: Detected shared linkage of DPDK
00:03:20.775 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket
00:03:20.775 EAL: Selected IOVA mode 'VA'
00:03:20.775 EAL: VFIO support initialized
00:03:20.775 TELEMETRY: No legacy callbacks, legacy socket not created
00:03:20.775
00:03:20.775
00:03:20.775 CUnit - A unit testing framework for C - Version 2.1-3
00:03:20.775 http://cunit.sourceforge.net/
00:03:20.775
00:03:20.775
00:03:20.775 Suite: memory
00:03:20.775 Test: test ...
00:03:20.775 register 0x200000200000 2097152
00:03:20.775 malloc 3145728
00:03:20.775 register 0x200000400000 4194304
00:03:20.775 buf 0x200000500000 len 3145728 PASSED
00:03:20.775 malloc 64
00:03:20.775 buf 0x2000004fff40 len 64 PASSED
00:03:20.775 malloc 4194304
00:03:20.775 register 0x200000800000 6291456
00:03:20.775 buf 0x200000a00000 len 4194304 PASSED
00:03:20.775 free 0x200000500000 3145728
00:03:20.775 free 0x2000004fff40 64
00:03:20.775 unregister 0x200000400000 4194304 PASSED
00:03:20.775 free 0x200000a00000 4194304
00:03:20.775 unregister 0x200000800000 6291456 PASSED
00:03:20.775 malloc 8388608
00:03:20.775 register 0x200000400000 10485760
00:03:20.775 buf 0x200000600000 len 8388608 PASSED
00:03:20.775 free 0x200000600000 8388608
00:03:20.775 unregister 0x200000400000 10485760 PASSED
00:03:20.775 passed
00:03:20.775
00:03:20.775 Run Summary: Type Total Ran Passed Failed Inactive
00:03:20.775 suites 1 1 n/a 0 0
00:03:20.775 tests 1 1 1 0 0
00:03:20.775 asserts 15 15 15 0 n/a
00:03:20.775
00:03:20.775 Elapsed time = 0.008 seconds
00:03:20.775
00:03:20.775 real 0m0.055s
00:03:20.775 user 0m0.016s
00:03:20.775 sys 0m0.039s
08:47:36 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:20.775 08:47:36 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x
00:03:20.775 ************************************
00:03:20.775 END TEST env_mem_callbacks
00:03:20.775 ************************************
00:03:20.775
00:03:20.775 real 0m6.249s
00:03:20.775 user 0m4.050s
00:03:20.775 sys 0m1.278s
08:47:36 env -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:20.775 08:47:36 env -- common/autotest_common.sh@10 -- # set +x
00:03:20.775 ************************************
00:03:20.775 END TEST env
00:03:20.775 ************************************
00:03:20.775 08:47:36 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
08:47:36
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:20.775 08:47:36 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:20.775 08:47:36 -- common/autotest_common.sh@10 -- # set +x
00:03:21.036 ************************************
00:03:21.036 START TEST rpc
00:03:21.036 ************************************
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh
00:03:21.036 * Looking for test storage...
00:03:21.036 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1693 -- # lcov --version
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:21.036 08:47:36 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:21.036 08:47:36 rpc -- scripts/common.sh@336 -- # IFS=.-:
00:03:21.036 08:47:36 rpc -- scripts/common.sh@336 -- # read -ra ver1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@337 -- # IFS=.-:
00:03:21.036 08:47:36 rpc -- scripts/common.sh@337 -- # read -ra ver2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@338 -- # local 'op=<'
00:03:21.036 08:47:36 rpc -- scripts/common.sh@340 -- # ver1_l=2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@341 -- # ver2_l=1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:21.036 08:47:36 rpc -- scripts/common.sh@344 -- # case "$op" in
00:03:21.036 08:47:36 rpc -- scripts/common.sh@345 -- # : 1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:21.036 08:47:36 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:21.036 08:47:36 rpc -- scripts/common.sh@365 -- # decimal 1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@353 -- # local d=1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:21.036 08:47:36 rpc -- scripts/common.sh@355 -- # echo 1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@365 -- # ver1[v]=1
00:03:21.036 08:47:36 rpc -- scripts/common.sh@366 -- # decimal 2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@353 -- # local d=2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:21.036 08:47:36 rpc -- scripts/common.sh@355 -- # echo 2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@366 -- # ver2[v]=2
00:03:21.036 08:47:36 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:21.036 08:47:36 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:21.036 08:47:36 rpc -- scripts/common.sh@368 -- # return 0
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:21.036 --rc genhtml_branch_coverage=1
00:03:21.036 --rc genhtml_function_coverage=1
00:03:21.036 --rc genhtml_legend=1
00:03:21.036 --rc geninfo_all_blocks=1
00:03:21.036 --rc geninfo_unexecuted_blocks=1
00:03:21.036
00:03:21.036 '
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:21.036 --rc genhtml_branch_coverage=1
00:03:21.036 --rc genhtml_function_coverage=1
00:03:21.036 --rc genhtml_legend=1
00:03:21.036 --rc geninfo_all_blocks=1
00:03:21.036 --rc geninfo_unexecuted_blocks=1
00:03:21.036
00:03:21.036 '
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:21.036 --rc genhtml_branch_coverage=1
00:03:21.036 --rc genhtml_function_coverage=1
00:03:21.036 --rc genhtml_legend=1
00:03:21.036 --rc geninfo_all_blocks=1
00:03:21.036 --rc geninfo_unexecuted_blocks=1
00:03:21.036
00:03:21.036 '
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:21.036 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:21.036 --rc genhtml_branch_coverage=1
00:03:21.036 --rc genhtml_function_coverage=1
00:03:21.036 --rc genhtml_legend=1
00:03:21.036 --rc geninfo_all_blocks=1
00:03:21.036 --rc geninfo_unexecuted_blocks=1
00:03:21.036
00:03:21.036 '
00:03:21.036 08:47:36 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2140823
00:03:21.036 08:47:36 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
00:03:21.036 08:47:36 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev
00:03:21.036 08:47:36 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2140823
00:03:21.036 08:47:36 rpc -- common/autotest_common.sh@835 -- # '[' -z 2140823 ']'
00:03:21.036 08:47:37 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:03:21.036 08:47:37 rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:21.036 08:47:37 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:03:21.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:03:21.036 08:47:37 rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:21.036 08:47:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:21.036 [2024-11-20 08:47:37.053437] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:03:21.036 [2024-11-20 08:47:37.053486] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2140823 ]
00:03:21.296 [2024-11-20 08:47:37.120476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:21.296 [2024-11-20 08:47:37.162405] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified.
00:03:21.296 [2024-11-20 08:47:37.162446] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2140823' to capture a snapshot of events at runtime.
00:03:21.296 [2024-11-20 08:47:37.162453] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:03:21.296 [2024-11-20 08:47:37.162459] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:03:21.296 [2024-11-20 08:47:37.162463] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2140823 for offline analysis/debug.
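Editor's note: the "waitforlisten" step above blocks until the freshly launched spdk_tgt starts accepting connections on /var/tmp/spdk.sock. A rough Python sketch of that polling idea (hypothetical; the real helper lives in autotest_common.sh and behaves differently in detail) is:

```python
import socket
import time

# Hypothetical sketch of a "wait for listen" helper: poll a UNIX domain
# socket until something accepts a connection, or give up after a timeout.
def wait_for_listen(path, timeout=5.0, interval=0.1):
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        try:
            s.connect(path)  # succeeds once the target process is listening
            return True
        except OSError:
            time.sleep(interval)  # not up yet; retry until the deadline
        finally:
            s.close()
    return False

# No listener exists at this path, so the poll times out and returns False.
print(wait_for_listen("/var/tmp/does-not-exist.sock", timeout=0.3))
```

The timeout bound is what turns a hung target into a test failure instead of a stuck pipeline.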
00:03:21.296 [2024-11-20 08:47:37.163011] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:21.555 08:47:37 rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:21.555 08:47:37 rpc -- common/autotest_common.sh@868 -- # return 0
00:03:21.555 08:47:37 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:21.555 08:47:37 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc
00:03:21.555 08:47:37 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd
00:03:21.555 08:47:37 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity
00:03:21.555 08:47:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:21.555 08:47:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:21.555 08:47:37 rpc -- common/autotest_common.sh@10 -- # set +x
00:03:21.555 ************************************
00:03:21.555 START TEST rpc_integrity
00:03:21.555 ************************************
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]'
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']'
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[
00:03:21.555 {
00:03:21.555 "name": "Malloc0",
00:03:21.555 "aliases": [
00:03:21.555 "7c2172a5-28c9-4600-ad44-7020f28c4bfa"
00:03:21.555 ],
00:03:21.555 "product_name": "Malloc disk",
00:03:21.555 "block_size": 512,
00:03:21.555 "num_blocks": 16384,
00:03:21.555 "uuid": "7c2172a5-28c9-4600-ad44-7020f28c4bfa",
00:03:21.555 "assigned_rate_limits": {
00:03:21.555 "rw_ios_per_sec": 0,
00:03:21.555 "rw_mbytes_per_sec": 0,
00:03:21.555 "r_mbytes_per_sec": 0,
00:03:21.555 "w_mbytes_per_sec": 0
00:03:21.555 },
00:03:21.555 "claimed": false,
00:03:21.555 "zoned": false,
00:03:21.555 "supported_io_types": {
00:03:21.555 "read": true,
00:03:21.555 "write": true,
00:03:21.555 "unmap": true,
00:03:21.555 "flush": true,
00:03:21.555 "reset": true,
00:03:21.555 "nvme_admin": false,
00:03:21.555 "nvme_io": false,
00:03:21.555 "nvme_io_md": false,
00:03:21.555 "write_zeroes": true,
00:03:21.555 "zcopy": true,
00:03:21.555 "get_zone_info": false,
00:03:21.555 "zone_management": false,
00:03:21.555 "zone_append": false,
00:03:21.555 "compare": false,
00:03:21.555 "compare_and_write": false,
00:03:21.555 "abort": true,
00:03:21.555 "seek_hole": false,
00:03:21.555 "seek_data": false,
00:03:21.555 "copy": true,
00:03:21.555 "nvme_iov_md": false
00:03:21.555 },
00:03:21.555 "memory_domains": [
00:03:21.555 {
00:03:21.555 "dma_device_id": "system",
00:03:21.555 "dma_device_type": 1
00:03:21.555 },
00:03:21.555 {
00:03:21.555 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:21.555 "dma_device_type": 2
00:03:21.555 }
00:03:21.555 ],
00:03:21.555 "driver_specific": {}
00:03:21.555 }
00:03:21.555 ]'
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']'
00:03:21.555 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:21.555 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:21.555 [2024-11-20 08:47:37.548423] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0
00:03:21.556 [2024-11-20 08:47:37.548454] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened
00:03:21.556 [2024-11-20 08:47:37.548467] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xda96e0
00:03:21.556 [2024-11-20 08:47:37.548474] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed
00:03:21.556 [2024-11-20 08:47:37.549583] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered
00:03:21.556 [2024-11-20 08:47:37.549605] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0
00:03:21.556 Passthru0
00:03:21.556 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:21.556 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs
00:03:21.556 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable
00:03:21.556 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x
00:03:21.556 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:03:21.556 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[
00:03:21.556 {
00:03:21.556 "name": "Malloc0",
00:03:21.556 "aliases": [
00:03:21.556 "7c2172a5-28c9-4600-ad44-7020f28c4bfa"
00:03:21.556 ],
00:03:21.556 "product_name": "Malloc disk",
00:03:21.556 "block_size": 512,
00:03:21.556 "num_blocks": 16384,
00:03:21.556 "uuid": "7c2172a5-28c9-4600-ad44-7020f28c4bfa",
00:03:21.556 "assigned_rate_limits": {
00:03:21.556 "rw_ios_per_sec": 0,
00:03:21.556 "rw_mbytes_per_sec": 0,
00:03:21.556 "r_mbytes_per_sec": 0,
00:03:21.556 "w_mbytes_per_sec": 0
00:03:21.556 },
00:03:21.556 "claimed": true,
00:03:21.556 "claim_type": "exclusive_write",
00:03:21.556 "zoned": false,
00:03:21.556 "supported_io_types": {
00:03:21.556 "read": true,
00:03:21.556 "write": true,
00:03:21.556 "unmap": true,
00:03:21.556 "flush": true,
00:03:21.556 "reset": true,
00:03:21.556 "nvme_admin": false,
00:03:21.556 "nvme_io": false,
00:03:21.556 "nvme_io_md": false,
00:03:21.556 "write_zeroes": true,
00:03:21.556 "zcopy": true,
00:03:21.556 "get_zone_info": false,
00:03:21.556 "zone_management": false,
00:03:21.556 "zone_append": false,
00:03:21.556 "compare": false,
00:03:21.556 "compare_and_write": false,
00:03:21.556 "abort": true,
00:03:21.556 "seek_hole": false,
00:03:21.556 "seek_data": false,
00:03:21.556 "copy": true,
00:03:21.556 "nvme_iov_md": false
00:03:21.556 },
00:03:21.556 "memory_domains": [
00:03:21.556 {
00:03:21.556 "dma_device_id": "system",
00:03:21.556 "dma_device_type": 1
00:03:21.556 },
00:03:21.556 {
00:03:21.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE",
00:03:21.556 "dma_device_type": 2
00:03:21.556 }
00:03:21.556 ],
00:03:21.556 "driver_specific": {}
00:03:21.556 },
00:03:21.556 {
00:03:21.556 "name": "Passthru0", 00:03:21.556 "aliases": [ 00:03:21.556 "0fb6bf5c-cf29-57d0-93c7-76154f9f10ae" 00:03:21.556 ], 00:03:21.556 "product_name": "passthru", 00:03:21.556 "block_size": 512, 00:03:21.556 "num_blocks": 16384, 00:03:21.556 "uuid": "0fb6bf5c-cf29-57d0-93c7-76154f9f10ae", 00:03:21.556 "assigned_rate_limits": { 00:03:21.556 "rw_ios_per_sec": 0, 00:03:21.556 "rw_mbytes_per_sec": 0, 00:03:21.556 "r_mbytes_per_sec": 0, 00:03:21.556 "w_mbytes_per_sec": 0 00:03:21.556 }, 00:03:21.556 "claimed": false, 00:03:21.556 "zoned": false, 00:03:21.556 "supported_io_types": { 00:03:21.556 "read": true, 00:03:21.556 "write": true, 00:03:21.556 "unmap": true, 00:03:21.556 "flush": true, 00:03:21.556 "reset": true, 00:03:21.556 "nvme_admin": false, 00:03:21.556 "nvme_io": false, 00:03:21.556 "nvme_io_md": false, 00:03:21.556 "write_zeroes": true, 00:03:21.556 "zcopy": true, 00:03:21.556 "get_zone_info": false, 00:03:21.556 "zone_management": false, 00:03:21.556 "zone_append": false, 00:03:21.556 "compare": false, 00:03:21.556 "compare_and_write": false, 00:03:21.556 "abort": true, 00:03:21.556 "seek_hole": false, 00:03:21.556 "seek_data": false, 00:03:21.556 "copy": true, 00:03:21.556 "nvme_iov_md": false 00:03:21.556 }, 00:03:21.556 "memory_domains": [ 00:03:21.556 { 00:03:21.556 "dma_device_id": "system", 00:03:21.556 "dma_device_type": 1 00:03:21.556 }, 00:03:21.556 { 00:03:21.556 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.556 "dma_device_type": 2 00:03:21.556 } 00:03:21.556 ], 00:03:21.556 "driver_specific": { 00:03:21.556 "passthru": { 00:03:21.556 "name": "Passthru0", 00:03:21.556 "base_bdev_name": "Malloc0" 00:03:21.556 } 00:03:21.556 } 00:03:21.556 } 00:03:21.556 ]' 00:03:21.556 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:21.815 08:47:37 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:21.815 08:47:37 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:21.815 00:03:21.815 real 0m0.285s 00:03:21.815 user 0m0.186s 00:03:21.815 sys 0m0.034s 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 ************************************ 00:03:21.815 END TEST rpc_integrity 00:03:21.815 ************************************ 00:03:21.815 08:47:37 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:03:21.815 08:47:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:21.815 08:47:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:21.815 08:47:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 ************************************ 00:03:21.815 START TEST rpc_plugins 
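The trace above repeatedly shows `run_test <name> <cmd>` emitting START/END banners around a timed test body (the `real`/`user`/`sys` lines). A minimal sketch of that wrapper pattern, with illustrative names only (this is not SPDK's actual `run_test` implementation):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the run_test banner/timing pattern seen in this log:
# print a START banner, time the test body, print an END banner, and
# propagate the body's exit status. Names here are illustrative.
run_test_sketch() {
    local name=$1; shift
    echo "************************************"
    echo "START TEST $name"
    echo "************************************"
    local rc=0
    time "$@" || rc=$?          # 'time' reports real/user/sys on stderr
    echo "************************************"
    echo "END TEST $name"
    echo "************************************"
    return $rc
}

demo_body() { echo "demo body ran"; }
run_test_sketch demo_test demo_body
```

The exit-status capture (`|| rc=$?`) is what lets the harness keep running later suites even when one test body fails.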
00:03:21.815 ************************************ 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:03:21.815 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.815 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:03:21.815 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.815 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.815 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:03:21.815 { 00:03:21.815 "name": "Malloc1", 00:03:21.815 "aliases": [ 00:03:21.816 "0c9315ba-ccbe-4dbd-96d9-8835afb668be" 00:03:21.816 ], 00:03:21.816 "product_name": "Malloc disk", 00:03:21.816 "block_size": 4096, 00:03:21.816 "num_blocks": 256, 00:03:21.816 "uuid": "0c9315ba-ccbe-4dbd-96d9-8835afb668be", 00:03:21.816 "assigned_rate_limits": { 00:03:21.816 "rw_ios_per_sec": 0, 00:03:21.816 "rw_mbytes_per_sec": 0, 00:03:21.816 "r_mbytes_per_sec": 0, 00:03:21.816 "w_mbytes_per_sec": 0 00:03:21.816 }, 00:03:21.816 "claimed": false, 00:03:21.816 "zoned": false, 00:03:21.816 "supported_io_types": { 00:03:21.816 "read": true, 00:03:21.816 "write": true, 00:03:21.816 "unmap": true, 00:03:21.816 "flush": true, 00:03:21.816 "reset": true, 00:03:21.816 "nvme_admin": false, 00:03:21.816 "nvme_io": false, 00:03:21.816 "nvme_io_md": false, 00:03:21.816 "write_zeroes": true, 00:03:21.816 "zcopy": true, 00:03:21.816 "get_zone_info": false, 00:03:21.816 "zone_management": false, 00:03:21.816 
"zone_append": false, 00:03:21.816 "compare": false, 00:03:21.816 "compare_and_write": false, 00:03:21.816 "abort": true, 00:03:21.816 "seek_hole": false, 00:03:21.816 "seek_data": false, 00:03:21.816 "copy": true, 00:03:21.816 "nvme_iov_md": false 00:03:21.816 }, 00:03:21.816 "memory_domains": [ 00:03:21.816 { 00:03:21.816 "dma_device_id": "system", 00:03:21.816 "dma_device_type": 1 00:03:21.816 }, 00:03:21.816 { 00:03:21.816 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:21.816 "dma_device_type": 2 00:03:21.816 } 00:03:21.816 ], 00:03:21.816 "driver_specific": {} 00:03:21.816 } 00:03:21.816 ]' 00:03:21.816 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:03:21.816 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:03:21.816 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:03:21.816 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.816 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:21.816 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:21.816 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:03:21.816 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:21.816 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:22.075 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.075 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:03:22.075 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:03:22.075 08:47:37 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:03:22.075 00:03:22.075 real 0m0.140s 00:03:22.075 user 0m0.084s 00:03:22.075 sys 0m0.020s 00:03:22.075 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.075 08:47:37 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:03:22.075 ************************************ 
00:03:22.075 END TEST rpc_plugins 00:03:22.075 ************************************ 00:03:22.075 08:47:37 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:03:22.075 08:47:37 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.075 08:47:37 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.075 08:47:37 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.075 ************************************ 00:03:22.075 START TEST rpc_trace_cmd_test 00:03:22.075 ************************************ 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:03:22.075 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2140823", 00:03:22.075 "tpoint_group_mask": "0x8", 00:03:22.075 "iscsi_conn": { 00:03:22.075 "mask": "0x2", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "scsi": { 00:03:22.075 "mask": "0x4", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "bdev": { 00:03:22.075 "mask": "0x8", 00:03:22.075 "tpoint_mask": "0xffffffffffffffff" 00:03:22.075 }, 00:03:22.075 "nvmf_rdma": { 00:03:22.075 "mask": "0x10", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "nvmf_tcp": { 00:03:22.075 "mask": "0x20", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "ftl": { 00:03:22.075 "mask": "0x40", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "blobfs": { 00:03:22.075 "mask": "0x80", 00:03:22.075 
"tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "dsa": { 00:03:22.075 "mask": "0x200", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "thread": { 00:03:22.075 "mask": "0x400", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "nvme_pcie": { 00:03:22.075 "mask": "0x800", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "iaa": { 00:03:22.075 "mask": "0x1000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "nvme_tcp": { 00:03:22.075 "mask": "0x2000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "bdev_nvme": { 00:03:22.075 "mask": "0x4000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "sock": { 00:03:22.075 "mask": "0x8000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "blob": { 00:03:22.075 "mask": "0x10000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "bdev_raid": { 00:03:22.075 "mask": "0x20000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 }, 00:03:22.075 "scheduler": { 00:03:22.075 "mask": "0x40000", 00:03:22.075 "tpoint_mask": "0x0" 00:03:22.075 } 00:03:22.075 }' 00:03:22.075 08:47:37 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:03:22.075 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:03:22.335 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:03:22.335 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:03:22.335 08:47:38 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:03:22.335 00:03:22.335 real 0m0.205s 00:03:22.335 user 0m0.176s 00:03:22.335 sys 0m0.020s 00:03:22.335 08:47:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.335 08:47:38 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:03:22.335 ************************************ 00:03:22.335 END TEST rpc_trace_cmd_test 00:03:22.335 ************************************ 00:03:22.335 08:47:38 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:03:22.335 08:47:38 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:03:22.335 08:47:38 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:03:22.335 08:47:38 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:22.335 08:47:38 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:22.335 08:47:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:22.335 ************************************ 00:03:22.335 START TEST rpc_daemon_integrity 00:03:22.335 ************************************ 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:03:22.335 { 00:03:22.335 "name": "Malloc2", 00:03:22.335 "aliases": [ 00:03:22.335 "ed609bdd-1ee3-4d66-98fe-4a51dea5e6e6" 00:03:22.335 ], 00:03:22.335 "product_name": "Malloc disk", 00:03:22.335 "block_size": 512, 00:03:22.335 "num_blocks": 16384, 00:03:22.335 "uuid": "ed609bdd-1ee3-4d66-98fe-4a51dea5e6e6", 00:03:22.335 "assigned_rate_limits": { 00:03:22.335 "rw_ios_per_sec": 0, 00:03:22.335 "rw_mbytes_per_sec": 0, 00:03:22.335 "r_mbytes_per_sec": 0, 00:03:22.335 "w_mbytes_per_sec": 0 00:03:22.335 }, 00:03:22.335 "claimed": false, 00:03:22.335 "zoned": false, 00:03:22.335 "supported_io_types": { 00:03:22.335 "read": true, 00:03:22.335 "write": true, 00:03:22.335 "unmap": true, 00:03:22.335 "flush": true, 00:03:22.335 "reset": true, 00:03:22.335 "nvme_admin": false, 00:03:22.335 "nvme_io": false, 00:03:22.335 "nvme_io_md": false, 00:03:22.335 "write_zeroes": true, 00:03:22.335 "zcopy": true, 00:03:22.335 "get_zone_info": false, 00:03:22.335 "zone_management": false, 00:03:22.335 "zone_append": false, 00:03:22.335 "compare": false, 00:03:22.335 "compare_and_write": false, 00:03:22.335 "abort": true, 00:03:22.335 "seek_hole": false, 00:03:22.335 "seek_data": false, 00:03:22.335 "copy": true, 00:03:22.335 "nvme_iov_md": false 00:03:22.335 }, 00:03:22.335 "memory_domains": [ 00:03:22.335 { 
00:03:22.335 "dma_device_id": "system", 00:03:22.335 "dma_device_type": 1 00:03:22.335 }, 00:03:22.335 { 00:03:22.335 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.335 "dma_device_type": 2 00:03:22.335 } 00:03:22.335 ], 00:03:22.335 "driver_specific": {} 00:03:22.335 } 00:03:22.335 ]' 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.335 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 [2024-11-20 08:47:38.378678] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:03:22.595 [2024-11-20 08:47:38.378707] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:03:22.595 [2024-11-20 08:47:38.378719] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0xe39b70 00:03:22.595 [2024-11-20 08:47:38.378725] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:03:22.595 [2024-11-20 08:47:38.379703] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:03:22.595 [2024-11-20 08:47:38.379724] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:03:22.595 Passthru0 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:03:22.595 { 00:03:22.595 "name": "Malloc2", 00:03:22.595 "aliases": [ 00:03:22.595 "ed609bdd-1ee3-4d66-98fe-4a51dea5e6e6" 00:03:22.595 ], 00:03:22.595 "product_name": "Malloc disk", 00:03:22.595 "block_size": 512, 00:03:22.595 "num_blocks": 16384, 00:03:22.595 "uuid": "ed609bdd-1ee3-4d66-98fe-4a51dea5e6e6", 00:03:22.595 "assigned_rate_limits": { 00:03:22.595 "rw_ios_per_sec": 0, 00:03:22.595 "rw_mbytes_per_sec": 0, 00:03:22.595 "r_mbytes_per_sec": 0, 00:03:22.595 "w_mbytes_per_sec": 0 00:03:22.595 }, 00:03:22.595 "claimed": true, 00:03:22.595 "claim_type": "exclusive_write", 00:03:22.595 "zoned": false, 00:03:22.595 "supported_io_types": { 00:03:22.595 "read": true, 00:03:22.595 "write": true, 00:03:22.595 "unmap": true, 00:03:22.595 "flush": true, 00:03:22.595 "reset": true, 00:03:22.595 "nvme_admin": false, 00:03:22.595 "nvme_io": false, 00:03:22.595 "nvme_io_md": false, 00:03:22.595 "write_zeroes": true, 00:03:22.595 "zcopy": true, 00:03:22.595 "get_zone_info": false, 00:03:22.595 "zone_management": false, 00:03:22.595 "zone_append": false, 00:03:22.595 "compare": false, 00:03:22.595 "compare_and_write": false, 00:03:22.595 "abort": true, 00:03:22.595 "seek_hole": false, 00:03:22.595 "seek_data": false, 00:03:22.595 "copy": true, 00:03:22.595 "nvme_iov_md": false 00:03:22.595 }, 00:03:22.595 "memory_domains": [ 00:03:22.595 { 00:03:22.595 "dma_device_id": "system", 00:03:22.595 "dma_device_type": 1 00:03:22.595 }, 00:03:22.595 { 00:03:22.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.595 "dma_device_type": 2 00:03:22.595 } 00:03:22.595 ], 00:03:22.595 "driver_specific": {} 00:03:22.595 }, 00:03:22.595 { 00:03:22.595 "name": "Passthru0", 00:03:22.595 "aliases": [ 00:03:22.595 "12499ca1-b6d4-524e-bc14-2426d11bcbad" 00:03:22.595 ], 00:03:22.595 "product_name": "passthru", 00:03:22.595 "block_size": 512, 00:03:22.595 "num_blocks": 16384, 00:03:22.595 "uuid": 
"12499ca1-b6d4-524e-bc14-2426d11bcbad", 00:03:22.595 "assigned_rate_limits": { 00:03:22.595 "rw_ios_per_sec": 0, 00:03:22.595 "rw_mbytes_per_sec": 0, 00:03:22.595 "r_mbytes_per_sec": 0, 00:03:22.595 "w_mbytes_per_sec": 0 00:03:22.595 }, 00:03:22.595 "claimed": false, 00:03:22.595 "zoned": false, 00:03:22.595 "supported_io_types": { 00:03:22.595 "read": true, 00:03:22.595 "write": true, 00:03:22.595 "unmap": true, 00:03:22.595 "flush": true, 00:03:22.595 "reset": true, 00:03:22.595 "nvme_admin": false, 00:03:22.595 "nvme_io": false, 00:03:22.595 "nvme_io_md": false, 00:03:22.595 "write_zeroes": true, 00:03:22.595 "zcopy": true, 00:03:22.595 "get_zone_info": false, 00:03:22.595 "zone_management": false, 00:03:22.595 "zone_append": false, 00:03:22.595 "compare": false, 00:03:22.595 "compare_and_write": false, 00:03:22.595 "abort": true, 00:03:22.595 "seek_hole": false, 00:03:22.595 "seek_data": false, 00:03:22.595 "copy": true, 00:03:22.595 "nvme_iov_md": false 00:03:22.595 }, 00:03:22.595 "memory_domains": [ 00:03:22.595 { 00:03:22.595 "dma_device_id": "system", 00:03:22.595 "dma_device_type": 1 00:03:22.595 }, 00:03:22.595 { 00:03:22.595 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:03:22.595 "dma_device_type": 2 00:03:22.595 } 00:03:22.595 ], 00:03:22.595 "driver_specific": { 00:03:22.595 "passthru": { 00:03:22.595 "name": "Passthru0", 00:03:22.595 "base_bdev_name": "Malloc2" 00:03:22.595 } 00:03:22.595 } 00:03:22.595 } 00:03:22.595 ]' 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:03:22.595 00:03:22.595 real 0m0.282s 00:03:22.595 user 0m0.185s 00:03:22.595 sys 0m0.031s 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:22.595 08:47:38 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:03:22.595 ************************************ 00:03:22.595 END TEST rpc_daemon_integrity 00:03:22.595 ************************************ 00:03:22.595 08:47:38 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:03:22.595 08:47:38 rpc -- rpc/rpc.sh@84 -- # killprocess 2140823 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@954 -- # '[' -z 2140823 ']' 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@958 -- # kill -0 2140823 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@959 -- # uname 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:22.595 08:47:38 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2140823 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:22.595 08:47:38 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2140823' 00:03:22.595 killing process with pid 2140823 00:03:22.596 08:47:38 rpc -- common/autotest_common.sh@973 -- # kill 2140823 00:03:22.596 08:47:38 rpc -- common/autotest_common.sh@978 -- # wait 2140823 00:03:23.164 00:03:23.164 real 0m2.082s 00:03:23.164 user 0m2.665s 00:03:23.164 sys 0m0.686s 00:03:23.164 08:47:38 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:23.164 08:47:38 rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.164 ************************************ 00:03:23.164 END TEST rpc 00:03:23.164 ************************************ 00:03:23.164 08:47:38 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.164 08:47:38 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.164 08:47:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.164 08:47:38 -- common/autotest_common.sh@10 -- # set +x 00:03:23.164 ************************************ 00:03:23.164 START TEST skip_rpc 00:03:23.164 ************************************ 00:03:23.164 08:47:38 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:03:23.164 * Looking for test storage... 
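The `killprocess 2140823` sequence traced above probes the PID (`kill -0`), checks the process name, signals it, then waits for it to exit. A self-contained sketch of that shutdown pattern, using a background `sleep` as a stand-in for the `spdk_tgt` process (the function body is an illustration, not the actual `autotest_common.sh` helper):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the killprocess helper pattern in this log.
killprocess_sketch() {
    local pid=$1
    [ -n "$pid" ] || return 1            # guard: a PID must be given
    kill -0 "$pid" 2>/dev/null || return 0   # kill -0 probes without signalling
    kill "$pid"                          # request shutdown
    wait "$pid" 2>/dev/null              # reap the child so no zombie remains
    return 0
}

sleep 30 &                               # stand-in long-running target process
pid=$!
killprocess_sketch "$pid"
kill -0 "$pid" 2>/dev/null || echo "process gone"
```

`kill -0` is the key idiom: it delivers no signal, only an existence/permission check, which is why the harness uses it both before killing and (as `kill -0 $pid` in the suite epilogue) to confirm the target really exited.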
00:03:23.164 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@345 -- # : 1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:23.164 08:47:39 skip_rpc -- scripts/common.sh@368 -- # return 0 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:23.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.164 --rc genhtml_branch_coverage=1 00:03:23.164 --rc genhtml_function_coverage=1 00:03:23.164 --rc genhtml_legend=1 00:03:23.164 --rc geninfo_all_blocks=1 00:03:23.164 --rc geninfo_unexecuted_blocks=1 00:03:23.164 00:03:23.164 ' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:23.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.164 --rc genhtml_branch_coverage=1 00:03:23.164 --rc genhtml_function_coverage=1 00:03:23.164 --rc genhtml_legend=1 00:03:23.164 --rc geninfo_all_blocks=1 00:03:23.164 --rc geninfo_unexecuted_blocks=1 00:03:23.164 00:03:23.164 ' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:03:23.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.164 --rc genhtml_branch_coverage=1 00:03:23.164 --rc genhtml_function_coverage=1 00:03:23.164 --rc genhtml_legend=1 00:03:23.164 --rc geninfo_all_blocks=1 00:03:23.164 --rc geninfo_unexecuted_blocks=1 00:03:23.164 00:03:23.164 ' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:23.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:23.164 --rc genhtml_branch_coverage=1 00:03:23.164 --rc genhtml_function_coverage=1 00:03:23.164 --rc genhtml_legend=1 00:03:23.164 --rc geninfo_all_blocks=1 00:03:23.164 --rc geninfo_unexecuted_blocks=1 00:03:23.164 00:03:23.164 ' 00:03:23.164 08:47:39 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:23.164 08:47:39 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:23.164 08:47:39 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:23.164 08:47:39 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:23.164 ************************************ 00:03:23.164 START TEST skip_rpc 00:03:23.165 ************************************ 00:03:23.165 08:47:39 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:03:23.165 08:47:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2141464 00:03:23.165 08:47:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:03:23.165 08:47:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:23.165 08:47:39 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
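The `scripts/common.sh` trace above (`lt 1.15 2` via `cmp_versions`) splits both version strings on `.`, `-`, and `:` with `IFS=.-:` and compares field by field. A compressed, runnable re-sketch of that comparison logic (the real script pads missing fields differently and validates digits with a regex; this version simply treats missing fields as 0):

```shell
#!/usr/bin/env bash
# Minimal sketch of the version comparison traced above: returns success
# (0) when $1 is strictly less than $2, comparing numeric fields in order.
version_lt() {
    local IFS='.-:'
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1     # equal versions are not "less than"
}

version_lt 1.15 2       && echo "1.15 < 2"
version_lt 2.39.2 2.39.10 && echo "2.39.2 < 2.39.10"
```

Comparing numerically per field (rather than as strings) is what makes `2.39.2 < 2.39.10` come out correctly; a plain string comparison would get it backwards.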
00:03:23.423 [2024-11-20 08:47:39.247164] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:23.423 [2024-11-20 08:47:39.247205] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2141464 ] 00:03:23.423 [2024-11-20 08:47:39.322930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:23.423 [2024-11-20 08:47:39.363202] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:28.697 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:28.698 08:47:44 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2141464 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 2141464 ']' 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 2141464 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2141464 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2141464' 00:03:28.698 killing process with pid 2141464 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 2141464 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 2141464 00:03:28.698 00:03:28.698 real 0m5.367s 00:03:28.698 user 0m5.127s 00:03:28.698 sys 0m0.274s 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:28.698 08:47:44 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.698 ************************************ 00:03:28.698 END TEST skip_rpc 00:03:28.698 ************************************ 00:03:28.698 08:47:44 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:03:28.698 08:47:44 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:28.698 08:47:44 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:28.698 08:47:44 
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:28.698 ************************************ 00:03:28.698 START TEST skip_rpc_with_json 00:03:28.698 ************************************ 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2142406 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2142406 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 2142406 ']' 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:28.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:28.698 08:47:44 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:28.698 [2024-11-20 08:47:44.687357] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:28.698 [2024-11-20 08:47:44.687399] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2142406 ] 00:03:28.957 [2024-11-20 08:47:44.764145] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:28.957 [2024-11-20 08:47:44.806468] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.216 [2024-11-20 08:47:45.013511] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:03:29.216 request: 00:03:29.216 { 00:03:29.216 "trtype": "tcp", 00:03:29.216 "method": "nvmf_get_transports", 00:03:29.216 "req_id": 1 00:03:29.216 } 00:03:29.216 Got JSON-RPC error response 00:03:29.216 response: 00:03:29.216 { 00:03:29.216 "code": -19, 00:03:29.216 "message": "No such device" 00:03:29.216 } 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.216 [2024-11-20 08:47:45.025622] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:29.216 08:47:45 
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:03:29.216 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:03:29.217 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:29.217 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:29.217 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.217 { 00:03:29.217 "subsystems": [ 00:03:29.217 { 00:03:29.217 "subsystem": "fsdev", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "fsdev_set_opts", 00:03:29.217 "params": { 00:03:29.217 "fsdev_io_pool_size": 65535, 00:03:29.217 "fsdev_io_cache_size": 256 00:03:29.217 } 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "vfio_user_target", 00:03:29.217 "config": null 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "keyring", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "iobuf", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "iobuf_set_options", 00:03:29.217 "params": { 00:03:29.217 "small_pool_count": 8192, 00:03:29.217 "large_pool_count": 1024, 00:03:29.217 "small_bufsize": 8192, 00:03:29.217 "large_bufsize": 135168, 00:03:29.217 "enable_numa": false 00:03:29.217 } 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "sock", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "sock_set_default_impl", 00:03:29.217 "params": { 00:03:29.217 "impl_name": "posix" 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "sock_impl_set_options", 00:03:29.217 "params": { 00:03:29.217 "impl_name": "ssl", 00:03:29.217 "recv_buf_size": 4096, 00:03:29.217 "send_buf_size": 4096, 
00:03:29.217 "enable_recv_pipe": true, 00:03:29.217 "enable_quickack": false, 00:03:29.217 "enable_placement_id": 0, 00:03:29.217 "enable_zerocopy_send_server": true, 00:03:29.217 "enable_zerocopy_send_client": false, 00:03:29.217 "zerocopy_threshold": 0, 00:03:29.217 "tls_version": 0, 00:03:29.217 "enable_ktls": false 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "sock_impl_set_options", 00:03:29.217 "params": { 00:03:29.217 "impl_name": "posix", 00:03:29.217 "recv_buf_size": 2097152, 00:03:29.217 "send_buf_size": 2097152, 00:03:29.217 "enable_recv_pipe": true, 00:03:29.217 "enable_quickack": false, 00:03:29.217 "enable_placement_id": 0, 00:03:29.217 "enable_zerocopy_send_server": true, 00:03:29.217 "enable_zerocopy_send_client": false, 00:03:29.217 "zerocopy_threshold": 0, 00:03:29.217 "tls_version": 0, 00:03:29.217 "enable_ktls": false 00:03:29.217 } 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "vmd", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "accel", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "accel_set_options", 00:03:29.217 "params": { 00:03:29.217 "small_cache_size": 128, 00:03:29.217 "large_cache_size": 16, 00:03:29.217 "task_count": 2048, 00:03:29.217 "sequence_count": 2048, 00:03:29.217 "buf_count": 2048 00:03:29.217 } 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "bdev", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "bdev_set_options", 00:03:29.217 "params": { 00:03:29.217 "bdev_io_pool_size": 65535, 00:03:29.217 "bdev_io_cache_size": 256, 00:03:29.217 "bdev_auto_examine": true, 00:03:29.217 "iobuf_small_cache_size": 128, 00:03:29.217 "iobuf_large_cache_size": 16 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "bdev_raid_set_options", 00:03:29.217 "params": { 00:03:29.217 "process_window_size_kb": 1024, 00:03:29.217 "process_max_bandwidth_mb_sec": 0 
00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "bdev_iscsi_set_options", 00:03:29.217 "params": { 00:03:29.217 "timeout_sec": 30 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "bdev_nvme_set_options", 00:03:29.217 "params": { 00:03:29.217 "action_on_timeout": "none", 00:03:29.217 "timeout_us": 0, 00:03:29.217 "timeout_admin_us": 0, 00:03:29.217 "keep_alive_timeout_ms": 10000, 00:03:29.217 "arbitration_burst": 0, 00:03:29.217 "low_priority_weight": 0, 00:03:29.217 "medium_priority_weight": 0, 00:03:29.217 "high_priority_weight": 0, 00:03:29.217 "nvme_adminq_poll_period_us": 10000, 00:03:29.217 "nvme_ioq_poll_period_us": 0, 00:03:29.217 "io_queue_requests": 0, 00:03:29.217 "delay_cmd_submit": true, 00:03:29.217 "transport_retry_count": 4, 00:03:29.217 "bdev_retry_count": 3, 00:03:29.217 "transport_ack_timeout": 0, 00:03:29.217 "ctrlr_loss_timeout_sec": 0, 00:03:29.217 "reconnect_delay_sec": 0, 00:03:29.217 "fast_io_fail_timeout_sec": 0, 00:03:29.217 "disable_auto_failback": false, 00:03:29.217 "generate_uuids": false, 00:03:29.217 "transport_tos": 0, 00:03:29.217 "nvme_error_stat": false, 00:03:29.217 "rdma_srq_size": 0, 00:03:29.217 "io_path_stat": false, 00:03:29.217 "allow_accel_sequence": false, 00:03:29.217 "rdma_max_cq_size": 0, 00:03:29.217 "rdma_cm_event_timeout_ms": 0, 00:03:29.217 "dhchap_digests": [ 00:03:29.217 "sha256", 00:03:29.217 "sha384", 00:03:29.217 "sha512" 00:03:29.217 ], 00:03:29.217 "dhchap_dhgroups": [ 00:03:29.217 "null", 00:03:29.217 "ffdhe2048", 00:03:29.217 "ffdhe3072", 00:03:29.217 "ffdhe4096", 00:03:29.217 "ffdhe6144", 00:03:29.217 "ffdhe8192" 00:03:29.217 ] 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "bdev_nvme_set_hotplug", 00:03:29.217 "params": { 00:03:29.217 "period_us": 100000, 00:03:29.217 "enable": false 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "bdev_wait_for_examine" 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 
00:03:29.217 "subsystem": "scsi", 00:03:29.217 "config": null 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "scheduler", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "framework_set_scheduler", 00:03:29.217 "params": { 00:03:29.217 "name": "static" 00:03:29.217 } 00:03:29.217 } 00:03:29.217 ] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "vhost_scsi", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "vhost_blk", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "ublk", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "nbd", 00:03:29.217 "config": [] 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "subsystem": "nvmf", 00:03:29.217 "config": [ 00:03:29.217 { 00:03:29.217 "method": "nvmf_set_config", 00:03:29.217 "params": { 00:03:29.217 "discovery_filter": "match_any", 00:03:29.217 "admin_cmd_passthru": { 00:03:29.217 "identify_ctrlr": false 00:03:29.217 }, 00:03:29.217 "dhchap_digests": [ 00:03:29.217 "sha256", 00:03:29.217 "sha384", 00:03:29.217 "sha512" 00:03:29.217 ], 00:03:29.217 "dhchap_dhgroups": [ 00:03:29.217 "null", 00:03:29.217 "ffdhe2048", 00:03:29.217 "ffdhe3072", 00:03:29.217 "ffdhe4096", 00:03:29.217 "ffdhe6144", 00:03:29.217 "ffdhe8192" 00:03:29.217 ] 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "nvmf_set_max_subsystems", 00:03:29.217 "params": { 00:03:29.217 "max_subsystems": 1024 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "nvmf_set_crdt", 00:03:29.217 "params": { 00:03:29.217 "crdt1": 0, 00:03:29.217 "crdt2": 0, 00:03:29.217 "crdt3": 0 00:03:29.217 } 00:03:29.217 }, 00:03:29.217 { 00:03:29.217 "method": "nvmf_create_transport", 00:03:29.217 "params": { 00:03:29.217 "trtype": "TCP", 00:03:29.217 "max_queue_depth": 128, 00:03:29.217 "max_io_qpairs_per_ctrlr": 127, 00:03:29.217 "in_capsule_data_size": 4096, 00:03:29.217 "max_io_size": 131072, 00:03:29.217 
"io_unit_size": 131072, 00:03:29.217 "max_aq_depth": 128, 00:03:29.218 "num_shared_buffers": 511, 00:03:29.218 "buf_cache_size": 4294967295, 00:03:29.218 "dif_insert_or_strip": false, 00:03:29.218 "zcopy": false, 00:03:29.218 "c2h_success": true, 00:03:29.218 "sock_priority": 0, 00:03:29.218 "abort_timeout_sec": 1, 00:03:29.218 "ack_timeout": 0, 00:03:29.218 "data_wr_pool_size": 0 00:03:29.218 } 00:03:29.218 } 00:03:29.218 ] 00:03:29.218 }, 00:03:29.218 { 00:03:29.218 "subsystem": "iscsi", 00:03:29.218 "config": [ 00:03:29.218 { 00:03:29.218 "method": "iscsi_set_options", 00:03:29.218 "params": { 00:03:29.218 "node_base": "iqn.2016-06.io.spdk", 00:03:29.218 "max_sessions": 128, 00:03:29.218 "max_connections_per_session": 2, 00:03:29.218 "max_queue_depth": 64, 00:03:29.218 "default_time2wait": 2, 00:03:29.218 "default_time2retain": 20, 00:03:29.218 "first_burst_length": 8192, 00:03:29.218 "immediate_data": true, 00:03:29.218 "allow_duplicated_isid": false, 00:03:29.218 "error_recovery_level": 0, 00:03:29.218 "nop_timeout": 60, 00:03:29.218 "nop_in_interval": 30, 00:03:29.218 "disable_chap": false, 00:03:29.218 "require_chap": false, 00:03:29.218 "mutual_chap": false, 00:03:29.218 "chap_group": 0, 00:03:29.218 "max_large_datain_per_connection": 64, 00:03:29.218 "max_r2t_per_connection": 4, 00:03:29.218 "pdu_pool_size": 36864, 00:03:29.218 "immediate_data_pool_size": 16384, 00:03:29.218 "data_out_pool_size": 2048 00:03:29.218 } 00:03:29.218 } 00:03:29.218 ] 00:03:29.218 } 00:03:29.218 ] 00:03:29.218 } 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2142406 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2142406 ']' 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2142406 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@959 -- # uname 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142406 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:29.218 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:29.477 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142406' 00:03:29.477 killing process with pid 2142406 00:03:29.477 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2142406 00:03:29.477 08:47:45 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2142406 00:03:29.736 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2142429 00:03:29.736 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:03:29.736 08:47:45 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2142429 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 2142429 ']' 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 2142429 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2142429 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:35.014 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2142429' 00:03:35.015 killing process with pid 2142429 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 2142429 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 2142429 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:03:35.015 00:03:35.015 real 0m6.279s 00:03:35.015 user 0m5.984s 00:03:35.015 sys 0m0.599s 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:03:35.015 ************************************ 00:03:35.015 END TEST skip_rpc_with_json 00:03:35.015 ************************************ 00:03:35.015 08:47:50 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:03:35.015 08:47:50 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.015 08:47:50 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.015 08:47:50 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.015 ************************************ 00:03:35.015 START TEST skip_rpc_with_delay 00:03:35.015 ************************************ 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:35.015 08:47:50 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:03:35.015 [2024-11-20 08:47:51.038195] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:03:35.015 00:03:35.015 real 0m0.068s 00:03:35.015 user 0m0.045s 00:03:35.015 sys 0m0.023s 00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:35.015 08:47:51 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:03:35.015 ************************************ 00:03:35.015 END TEST skip_rpc_with_delay 00:03:35.015 ************************************ 00:03:35.275 08:47:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:03:35.275 08:47:51 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:03:35.275 08:47:51 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:03:35.275 08:47:51 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:35.275 08:47:51 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:35.275 08:47:51 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:35.275 ************************************ 00:03:35.275 START TEST exit_on_failed_rpc_init 00:03:35.275 ************************************ 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2143428 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2143428 
00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 2143428 ']' 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:35.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:35.275 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:03:35.275 [2024-11-20 08:47:51.172439] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:35.275 [2024-11-20 08:47:51.172482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143428 ] 00:03:35.275 [2024-11-20 08:47:51.248894] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:35.275 [2024-11-20 08:47:51.291773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:35.535 
08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:03:35.535 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:03:35.535 [2024-11-20 08:47:51.564895] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:35.535 [2024-11-20 08:47:51.564942] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143630 ]
00:03:35.794 [2024-11-20 08:47:51.640434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:35.794 [2024-11-20 08:47:51.681840] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:03:35.794 [2024-11-20 08:47:51.681898] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another.
00:03:35.794 [2024-11-20 08:47:51.681907] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock
00:03:35.794 [2024-11-20 08:47:51.681913] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2143428
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 2143428 ']'
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 2143428
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2143428
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2143428'
00:03:35.794 killing process with pid 2143428
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 2143428
00:03:35.794 08:47:51 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 2143428
00:03:36.054
00:03:36.054 real 0m0.950s
00:03:36.054 user 0m1.007s
00:03:36.054 sys 0m0.392s
00:03:36.054 08:47:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.054 08:47:52 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x
00:03:36.054 ************************************
00:03:36.054 END TEST exit_on_failed_rpc_init
00:03:36.054 ************************************
00:03:36.313 08:47:52 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json
00:03:36.313
00:03:36.313 real 0m13.135s
00:03:36.313 user 0m12.383s
00:03:36.313 sys 0m1.570s
00:03:36.313 08:47:52 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.313 08:47:52 skip_rpc -- common/autotest_common.sh@10 -- # set +x
00:03:36.313 ************************************
00:03:36.313 END TEST skip_rpc
00:03:36.313 ************************************
00:03:36.313 08:47:52 -- spdk/autotest.sh@158 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:36.313 08:47:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:36.313 08:47:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:36.313 08:47:52 -- common/autotest_common.sh@10 -- # set +x
00:03:36.313 ************************************
00:03:36.313 START TEST rpc_client
00:03:36.313 ************************************
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh
00:03:36.313 * Looking for test storage...
00:03:36.313 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@336 -- # IFS=.-:
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@336 -- # read -ra ver1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@337 -- # IFS=.-:
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@337 -- # read -ra ver2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@338 -- # local 'op=<'
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@340 -- # ver1_l=2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@341 -- # ver2_l=1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@344 -- # case "$op" in
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@345 -- # : 1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@365 -- # decimal 1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@353 -- # local d=1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@355 -- # echo 1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@366 -- # decimal 2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@353 -- # local d=2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@355 -- # echo 2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:36.313 08:47:52 rpc_client -- scripts/common.sh@368 -- # return 0
00:03:36.313 08:47:52 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:36.314 08:47:52 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:36.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.314 --rc genhtml_branch_coverage=1
00:03:36.314 --rc genhtml_function_coverage=1
00:03:36.314 --rc genhtml_legend=1
00:03:36.314 --rc geninfo_all_blocks=1
00:03:36.314 --rc geninfo_unexecuted_blocks=1
00:03:36.314
00:03:36.314 '
00:03:36.314 08:47:52 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:36.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.314 --rc genhtml_branch_coverage=1
00:03:36.314 --rc genhtml_function_coverage=1
00:03:36.314 --rc genhtml_legend=1
00:03:36.314 --rc geninfo_all_blocks=1
00:03:36.314 --rc geninfo_unexecuted_blocks=1
00:03:36.314
00:03:36.314 '
00:03:36.314 08:47:52 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:36.314 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.314 --rc genhtml_branch_coverage=1
00:03:36.314 --rc genhtml_function_coverage=1
00:03:36.314 --rc genhtml_legend=1
00:03:36.314 --rc geninfo_all_blocks=1
00:03:36.314 --rc geninfo_unexecuted_blocks=1
00:03:36.314
00:03:36.314 '
00:03:36.314 08:47:52 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test
00:03:36.573 OK
00:03:36.573 08:47:52 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT
00:03:36.573
00:03:36.573 real 0m0.194s
00:03:36.573 user 0m0.116s
00:03:36.573 sys 0m0.091s
00:03:36.573 08:47:52 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable
00:03:36.573 08:47:52 rpc_client -- common/autotest_common.sh@10 -- # set +x
00:03:36.573 ************************************
00:03:36.573 END TEST rpc_client
00:03:36.573 ************************************
00:03:36.573 08:47:52 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:03:36.573 08:47:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:03:36.573 08:47:52 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:03:36.573 08:47:52 -- common/autotest_common.sh@10 -- # set +x
00:03:36.573 ************************************
00:03:36.573 START TEST json_config
00:03:36.573 ************************************
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1693 -- # lcov --version
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l
00:03:36.573 08:47:52 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l
00:03:36.573 08:47:52 json_config -- scripts/common.sh@336 -- # IFS=.-:
00:03:36.573 08:47:52 json_config -- scripts/common.sh@336 -- # read -ra ver1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@337 -- # IFS=.-:
00:03:36.573 08:47:52 json_config -- scripts/common.sh@337 -- # read -ra ver2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@338 -- # local 'op=<'
00:03:36.573 08:47:52 json_config -- scripts/common.sh@340 -- # ver1_l=2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@341 -- # ver2_l=1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:03:36.573 08:47:52 json_config -- scripts/common.sh@344 -- # case "$op" in
00:03:36.573 08:47:52 json_config -- scripts/common.sh@345 -- # : 1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@364 -- # (( v = 0 ))
00:03:36.573 08:47:52 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:03:36.573 08:47:52 json_config -- scripts/common.sh@365 -- # decimal 1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@353 -- # local d=1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:03:36.573 08:47:52 json_config -- scripts/common.sh@355 -- # echo 1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@365 -- # ver1[v]=1
00:03:36.573 08:47:52 json_config -- scripts/common.sh@366 -- # decimal 2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@353 -- # local d=2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:03:36.573 08:47:52 json_config -- scripts/common.sh@355 -- # echo 2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@366 -- # ver2[v]=2
00:03:36.573 08:47:52 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:03:36.573 08:47:52 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:03:36.573 08:47:52 json_config -- scripts/common.sh@368 -- # return 0
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:03:36.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.573 --rc genhtml_branch_coverage=1
00:03:36.573 --rc genhtml_function_coverage=1
00:03:36.573 --rc genhtml_legend=1
00:03:36.573 --rc geninfo_all_blocks=1
00:03:36.573 --rc geninfo_unexecuted_blocks=1
00:03:36.573
00:03:36.573 '
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:03:36.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.573 --rc genhtml_branch_coverage=1
00:03:36.573 --rc genhtml_function_coverage=1
00:03:36.573 --rc genhtml_legend=1
00:03:36.573 --rc geninfo_all_blocks=1
00:03:36.573 --rc geninfo_unexecuted_blocks=1
00:03:36.573
00:03:36.573 '
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:03:36.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.573 --rc genhtml_branch_coverage=1
00:03:36.573 --rc genhtml_function_coverage=1
00:03:36.573 --rc genhtml_legend=1
00:03:36.573 --rc geninfo_all_blocks=1
00:03:36.573 --rc geninfo_unexecuted_blocks=1
00:03:36.573
00:03:36.573 '
00:03:36.573 08:47:52 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:03:36.573 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:03:36.573 --rc genhtml_branch_coverage=1
00:03:36.573 --rc genhtml_function_coverage=1
00:03:36.573 --rc genhtml_legend=1
00:03:36.573 --rc geninfo_all_blocks=1
00:03:36.573 --rc geninfo_unexecuted_blocks=1
00:03:36.573
00:03:36.573 '
00:03:36.573 08:47:52 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@7 -- # uname -s
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:03:36.573 08:47:52 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:03:36.836 08:47:52 json_config -- scripts/common.sh@15 -- # shopt -s extglob
00:03:36.836 08:47:52 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:03:36.836 08:47:52 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:03:36.836 08:47:52 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:03:36.836 08:47:52 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.836 08:47:52 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.836 08:47:52 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.836 08:47:52 json_config -- paths/export.sh@5 -- # export PATH
00:03:36.836 08:47:52 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:03:36.836 08:47:52 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:03:36.836 08:47:52 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:03:36.836 08:47:52 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@50 -- # : 0
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
00:03:36.836 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:03:36.836 08:47:52 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]]
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]]
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]]
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 ))
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='')
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock')
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024')
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@33 -- # declare -A app_params
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json')
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@40 -- # last_event_id=0
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init'
00:03:36.836 INFO: JSON configuration test init
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@364 -- # json_config_test_init
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:36.836 08:47:52 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc
00:03:36.836 08:47:52 json_config -- json_config/common.sh@9 -- # local app=target
00:03:36.836 08:47:52 json_config -- json_config/common.sh@10 -- # shift
00:03:36.836 08:47:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]]
00:03:36.836 08:47:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]]
00:03:36.836 08:47:52 json_config -- json_config/common.sh@15 -- # local app_extra_params=
00:03:36.836 08:47:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:36.836 08:47:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]]
00:03:36.836 08:47:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2143884
00:03:36.836 08:47:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...'
00:03:36.836 Waiting for target to run...
00:03:36.836 08:47:52 json_config -- json_config/common.sh@25 -- # waitforlisten 2143884 /var/tmp/spdk_tgt.sock
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@835 -- # '[' -z 2143884 ']'
00:03:36.836 08:47:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...'
00:03:36.836 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:03:36.836 08:47:52 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:36.836 [2024-11-20 08:47:52.697612] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:03:36.836 [2024-11-20 08:47:52.697663] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2143884 ]
00:03:37.131 [2024-11-20 08:47:52.989071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:03:37.131 [2024-11-20 08:47:53.022870] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@868 -- # return 0
00:03:37.729 08:47:53 json_config -- json_config/common.sh@26 -- # echo ''
00:03:37.729
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@276 -- # create_accel_config
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]]
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:37.729 08:47:53 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems
00:03:37.729 08:47:53 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config
00:03:37.729 08:47:53 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@283 -- # tgt_check_notification_types
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@45 -- # local ret=0
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister')
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@46 -- # local enabled_types
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister")
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]'
00:03:41.019 08:47:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister')
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@51 -- # local get_types
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@53 -- # local type_diff
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n'
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@54 -- # sort
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@54 -- # uniq -u
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@54 -- # type_diff=
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@62 -- # return 0
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@726 -- # xtrace_disable
00:03:41.019 08:47:56 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]]
00:03:41.019 08:47:56 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0
00:03:41.019 08:47:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0
00:03:41.278 MallocForNvmf0
00:03:41.278 08:47:57 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1
00:03:41.278 08:47:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1
00:03:41.278 MallocForNvmf1
00:03:41.537 08:47:57 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0
00:03:41.537 08:47:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0
00:03:41.537 [2024-11-20 08:47:57.495029] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:03:41.537 08:47:57 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:03:41.537 08:47:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:03:41.796 08:47:57 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:03:41.796 08:47:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
00:03:42.054 08:47:57 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:03:42.054 08:47:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
00:03:42.313 08:47:58 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:03:42.313 08:47:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
00:03:42.313 [2024-11-20 08:47:58.313561] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 ***
00:03:42.313 08:47:58 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config
00:03:42.313 08:47:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:42.313 08:47:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.572 08:47:58 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target
00:03:42.572 08:47:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:42.572 08:47:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.572 08:47:58 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]]
00:03:42.572 08:47:58 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:42.572 08:47:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck
00:03:42.572 MallocBdevForConfigChangeCheck
00:03:42.572 08:47:58 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init
00:03:42.572 08:47:58 json_config -- common/autotest_common.sh@732 -- # xtrace_disable
00:03:42.572 08:47:58 json_config -- common/autotest_common.sh@10 -- # set +x
00:03:42.831 08:47:58 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config
00:03:42.831 08:47:58 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:43.090 08:47:58 json_config -- json_config/json_config.sh@368 -- # echo 'INFO: shutting down applications...'
00:03:43.090 INFO: shutting down applications...
00:03:43.091 08:47:58 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]]
00:03:43.091 08:47:58 json_config -- json_config/json_config.sh@375 -- # json_config_clear target
00:03:43.091 08:47:58 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]]
00:03:43.091 08:47:58 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config
00:03:45.006 Calling clear_iscsi_subsystem
00:03:45.006 Calling clear_nvmf_subsystem
00:03:45.006 Calling clear_nbd_subsystem
00:03:45.006 Calling clear_ublk_subsystem
00:03:45.006 Calling clear_vhost_blk_subsystem
00:03:45.006 Calling clear_vhost_scsi_subsystem
00:03:45.006 Calling clear_bdev_subsystem
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@350 -- # count=100
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']'
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@352 -- # break
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']'
00:03:45.006 08:48:00 json_config -- json_config/json_config.sh@376 -- # json_config_test_shutdown_app target
00:03:45.006 08:48:00 json_config -- json_config/common.sh@31 -- # local app=target
00:03:45.006 08:48:00 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]]
00:03:45.006 08:48:00 json_config -- json_config/common.sh@35 -- # [[ -n 2143884 ]]
00:03:45.006 08:48:00 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2143884
00:03:45.006 08:48:00 json_config -- json_config/common.sh@40 -- # (( i = 0 ))
00:03:45.006 08:48:00 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:45.006 08:48:00 json_config -- json_config/common.sh@41 -- # kill -0 2143884
00:03:45.006 08:48:00 json_config -- json_config/common.sh@45 -- # sleep 0.5
00:03:45.575 08:48:01 json_config -- json_config/common.sh@40 -- # (( i++ ))
00:03:45.575 08:48:01 json_config -- json_config/common.sh@40 -- # (( i < 30 ))
00:03:45.575 08:48:01 json_config -- json_config/common.sh@41 -- # kill -0 2143884
00:03:45.575 08:48:01 json_config -- json_config/common.sh@42 -- # app_pid["$app"]=
00:03:45.575 08:48:01 json_config -- json_config/common.sh@43 -- # break
00:03:45.575 08:48:01 json_config -- json_config/common.sh@48 -- # [[ -n '' ]]
00:03:45.575 08:48:01 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done'
00:03:45.575 SPDK target shutdown done
00:03:45.575 08:48:01 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...'
00:03:45.575 INFO: relaunching applications...
00:03:45.575 08:48:01 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.575 08:48:01 json_config -- json_config/common.sh@9 -- # local app=target 00:03:45.575 08:48:01 json_config -- json_config/common.sh@10 -- # shift 00:03:45.575 08:48:01 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:45.575 08:48:01 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:45.575 08:48:01 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:03:45.575 08:48:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.575 08:48:01 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:45.575 08:48:01 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2145594 00:03:45.575 08:48:01 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:03:45.575 Waiting for target to run... 00:03:45.575 08:48:01 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:45.575 08:48:01 json_config -- json_config/common.sh@25 -- # waitforlisten 2145594 /var/tmp/spdk_tgt.sock 00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@835 -- # '[' -z 2145594 ']' 00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:45.575 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:45.575 08:48:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:45.575 [2024-11-20 08:48:01.519841] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:45.575 [2024-11-20 08:48:01.519904] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2145594 ] 00:03:46.143 [2024-11-20 08:48:01.976299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:46.143 [2024-11-20 08:48:02.028872] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:49.430 [2024-11-20 08:48:05.061005] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:03:49.431 [2024-11-20 08:48:05.093372] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:03:49.998 08:48:05 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:49.998 08:48:05 json_config -- common/autotest_common.sh@868 -- # return 0 00:03:49.998 08:48:05 json_config -- json_config/common.sh@26 -- # echo '' 00:03:49.998 00:03:49.998 08:48:05 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:03:49.998 08:48:05 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:03:49.998 INFO: Checking if target configuration is the same... 
00:03:49.998 08:48:05 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.998 08:48:05 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:03:49.998 08:48:05 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:49.998 + '[' 2 -ne 2 ']' 00:03:49.998 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:49.998 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:03:49.998 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:49.998 +++ basename /dev/fd/62 00:03:49.998 ++ mktemp /tmp/62.XXX 00:03:49.998 + tmp_file_1=/tmp/62.ClA 00:03:49.998 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:49.998 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:49.998 + tmp_file_2=/tmp/spdk_tgt_config.json.IdS 00:03:49.998 + ret=0 00:03:49.998 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.257 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.257 + diff -u /tmp/62.ClA /tmp/spdk_tgt_config.json.IdS 00:03:50.257 + echo 'INFO: JSON config files are the same' 00:03:50.257 INFO: JSON config files are the same 00:03:50.257 + rm /tmp/62.ClA /tmp/spdk_tgt_config.json.IdS 00:03:50.257 + exit 0 00:03:50.257 08:48:06 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:03:50.257 08:48:06 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:03:50.257 INFO: changing configuration and checking if this can be detected... 
00:03:50.257 08:48:06 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:50.257 08:48:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:03:50.516 08:48:06 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.516 08:48:06 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:03:50.516 08:48:06 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:03:50.516 + '[' 2 -ne 2 ']' 00:03:50.516 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:03:50.516 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:03:50.516 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:50.516 +++ basename /dev/fd/62 00:03:50.516 ++ mktemp /tmp/62.XXX 00:03:50.516 + tmp_file_1=/tmp/62.YOj 00:03:50.516 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:50.516 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:03:50.516 + tmp_file_2=/tmp/spdk_tgt_config.json.mzv 00:03:50.516 + ret=0 00:03:50.516 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.776 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:03:50.776 + diff -u /tmp/62.YOj /tmp/spdk_tgt_config.json.mzv 00:03:50.776 + ret=1 00:03:50.776 + echo '=== Start of file: /tmp/62.YOj ===' 00:03:50.776 + cat /tmp/62.YOj 00:03:50.776 + echo '=== End of file: /tmp/62.YOj ===' 00:03:50.776 + echo '' 00:03:50.776 + echo '=== Start of file: /tmp/spdk_tgt_config.json.mzv ===' 00:03:50.776 + cat /tmp/spdk_tgt_config.json.mzv 00:03:50.776 + echo '=== End of file: /tmp/spdk_tgt_config.json.mzv ===' 00:03:50.776 + echo '' 00:03:50.776 + rm /tmp/62.YOj /tmp/spdk_tgt_config.json.mzv 00:03:50.776 + exit 1 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:03:50.776 INFO: configuration change detected. 
00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@324 -- # [[ -n 2145594 ]] 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@200 -- # uname -s 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:03:50.776 08:48:06 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:50.776 08:48:06 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:51.035 08:48:06 json_config -- json_config/json_config.sh@330 -- # killprocess 2145594 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@954 -- # '[' -z 2145594 ']' 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@958 -- # kill -0 
2145594 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@959 -- # uname 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2145594 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2145594' 00:03:51.035 killing process with pid 2145594 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@973 -- # kill 2145594 00:03:51.035 08:48:06 json_config -- common/autotest_common.sh@978 -- # wait 2145594 00:03:52.413 08:48:08 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:03:52.413 08:48:08 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:03:52.413 08:48:08 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:52.413 08:48:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.413 08:48:08 json_config -- json_config/json_config.sh@335 -- # return 0 00:03:52.413 08:48:08 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:03:52.413 INFO: Success 00:03:52.413 00:03:52.413 real 0m15.932s 00:03:52.413 user 0m16.630s 00:03:52.413 sys 0m2.591s 00:03:52.413 08:48:08 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:52.413 08:48:08 json_config -- common/autotest_common.sh@10 -- # set +x 00:03:52.413 ************************************ 00:03:52.413 END TEST json_config 00:03:52.413 ************************************ 00:03:52.413 08:48:08 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:52.413 08:48:08 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:52.413 08:48:08 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:52.413 08:48:08 -- common/autotest_common.sh@10 -- # set +x 00:03:52.413 ************************************ 00:03:52.413 START TEST json_config_extra_key 00:03:52.413 ************************************ 00:03:52.413 08:48:08 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:52.674 08:48:08 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.674 --rc genhtml_branch_coverage=1 00:03:52.674 --rc genhtml_function_coverage=1 00:03:52.674 --rc genhtml_legend=1 00:03:52.674 --rc geninfo_all_blocks=1 
00:03:52.674 --rc geninfo_unexecuted_blocks=1 00:03:52.674 00:03:52.674 ' 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.674 --rc genhtml_branch_coverage=1 00:03:52.674 --rc genhtml_function_coverage=1 00:03:52.674 --rc genhtml_legend=1 00:03:52.674 --rc geninfo_all_blocks=1 00:03:52.674 --rc geninfo_unexecuted_blocks=1 00:03:52.674 00:03:52.674 ' 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.674 --rc genhtml_branch_coverage=1 00:03:52.674 --rc genhtml_function_coverage=1 00:03:52.674 --rc genhtml_legend=1 00:03:52.674 --rc geninfo_all_blocks=1 00:03:52.674 --rc geninfo_unexecuted_blocks=1 00:03:52.674 00:03:52.674 ' 00:03:52.674 08:48:08 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:52.674 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:52.674 --rc genhtml_branch_coverage=1 00:03:52.674 --rc genhtml_function_coverage=1 00:03:52.674 --rc genhtml_legend=1 00:03:52.674 --rc geninfo_all_blocks=1 00:03:52.674 --rc geninfo_unexecuted_blocks=1 00:03:52.674 00:03:52.674 ' 00:03:52.674 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:52.674 08:48:08 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:03:52.675 08:48:08 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:03:52.675 08:48:08 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:52.675 08:48:08 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:52.675 08:48:08 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:52.675 08:48:08 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.675 08:48:08 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.675 08:48:08 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.675 08:48:08 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:03:52.675 08:48:08 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:52.675 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:52.675 08:48:08 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:03:52.675 08:48:08 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:03:52.675 INFO: launching applications... 00:03:52.675 08:48:08 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2147119 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:03:52.675 Waiting for target to run... 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2147119 /var/tmp/spdk_tgt.sock 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 2147119 ']' 00:03:52.675 08:48:08 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:03:52.675 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:52.675 08:48:08 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:52.675 [2024-11-20 08:48:08.684116] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:52.675 [2024-11-20 08:48:08.684168] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147119 ] 00:03:53.245 [2024-11-20 08:48:08.981803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.245 [2024-11-20 08:48:09.021879] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:53.504 08:48:09 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:53.504 08:48:09 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:03:53.504 00:03:53.504 08:48:09 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:03:53.504 INFO: shutting down applications... 00:03:53.504 08:48:09 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2147119 ]] 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2147119 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2147119 00:03:53.504 08:48:09 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:03:54.072 08:48:10 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2147119 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@43 -- # break 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:03:54.072 08:48:10 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:03:54.072 SPDK target shutdown done 00:03:54.072 08:48:10 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:03:54.072 Success 00:03:54.072 00:03:54.072 real 0m1.581s 00:03:54.072 user 0m1.375s 00:03:54.072 sys 0m0.381s 00:03:54.072 08:48:10 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:54.072 08:48:10 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:03:54.072 ************************************ 00:03:54.072 END TEST json_config_extra_key 00:03:54.072 ************************************ 00:03:54.072 08:48:10 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:54.072 08:48:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:54.072 08:48:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:54.072 08:48:10 -- common/autotest_common.sh@10 -- # set +x 00:03:54.072 ************************************ 00:03:54.072 START TEST alias_rpc 00:03:54.072 ************************************ 00:03:54.072 08:48:10 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:03:54.332 * Looking for test storage... 
00:03:54.332 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:03:54.332 08:48:10 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:54.332 08:48:10 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:03:54.332 08:48:10 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:54.332 08:48:10 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@345 -- # : 1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:03:54.332 08:48:10 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:54.333 08:48:10 alias_rpc -- scripts/common.sh@368 -- # return 0 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.333 --rc genhtml_branch_coverage=1 00:03:54.333 --rc genhtml_function_coverage=1 00:03:54.333 --rc genhtml_legend=1 00:03:54.333 --rc geninfo_all_blocks=1 00:03:54.333 --rc geninfo_unexecuted_blocks=1 00:03:54.333 00:03:54.333 ' 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.333 --rc genhtml_branch_coverage=1 00:03:54.333 --rc genhtml_function_coverage=1 00:03:54.333 --rc genhtml_legend=1 00:03:54.333 --rc geninfo_all_blocks=1 00:03:54.333 --rc geninfo_unexecuted_blocks=1 00:03:54.333 00:03:54.333 ' 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:03:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.333 --rc genhtml_branch_coverage=1 00:03:54.333 --rc genhtml_function_coverage=1 00:03:54.333 --rc genhtml_legend=1 00:03:54.333 --rc geninfo_all_blocks=1 00:03:54.333 --rc geninfo_unexecuted_blocks=1 00:03:54.333 00:03:54.333 ' 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:54.333 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:54.333 --rc genhtml_branch_coverage=1 00:03:54.333 --rc genhtml_function_coverage=1 00:03:54.333 --rc genhtml_legend=1 00:03:54.333 --rc geninfo_all_blocks=1 00:03:54.333 --rc geninfo_unexecuted_blocks=1 00:03:54.333 00:03:54.333 ' 00:03:54.333 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:03:54.333 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2147595 00:03:54.333 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2147595 00:03:54.333 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 2147595 ']' 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:54.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:54.333 08:48:10 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:54.333 [2024-11-20 08:48:10.314421] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:54.333 [2024-11-20 08:48:10.314471] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147595 ] 00:03:54.592 [2024-11-20 08:48:10.390878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:54.592 [2024-11-20 08:48:10.433888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:03:54.851 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:03:54.851 08:48:10 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2147595 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 2147595 ']' 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 2147595 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:54.851 08:48:10 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147595 00:03:55.110 08:48:10 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:55.110 08:48:10 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:55.110 08:48:10 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147595' 00:03:55.110 killing process with pid 2147595 00:03:55.110 08:48:10 alias_rpc -- common/autotest_common.sh@973 -- # kill 2147595 00:03:55.110 08:48:10 alias_rpc -- common/autotest_common.sh@978 -- # wait 2147595 00:03:55.370 00:03:55.370 real 0m1.143s 00:03:55.370 user 0m1.167s 00:03:55.370 sys 0m0.398s 00:03:55.370 08:48:11 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:03:55.370 08:48:11 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:03:55.370 ************************************ 00:03:55.370 END TEST alias_rpc 00:03:55.370 ************************************ 00:03:55.370 08:48:11 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:03:55.370 08:48:11 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:55.370 08:48:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:55.370 08:48:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:55.370 08:48:11 -- common/autotest_common.sh@10 -- # set +x 00:03:55.370 ************************************ 00:03:55.370 START TEST spdkcli_tcp 00:03:55.370 ************************************ 00:03:55.370 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:03:55.370 * Looking for test storage... 
00:03:55.370 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:03:55.370 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:55.370 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:03:55.370 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:55.630 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:55.630 08:48:11 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:55.630 08:48:11 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:55.630 08:48:11 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:55.630 08:48:11 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:55.631 08:48:11 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.631 --rc genhtml_branch_coverage=1 00:03:55.631 --rc genhtml_function_coverage=1 00:03:55.631 --rc genhtml_legend=1 00:03:55.631 --rc geninfo_all_blocks=1 00:03:55.631 --rc geninfo_unexecuted_blocks=1 00:03:55.631 00:03:55.631 ' 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.631 --rc genhtml_branch_coverage=1 00:03:55.631 --rc genhtml_function_coverage=1 00:03:55.631 --rc genhtml_legend=1 00:03:55.631 --rc geninfo_all_blocks=1 00:03:55.631 --rc geninfo_unexecuted_blocks=1 00:03:55.631 00:03:55.631 ' 00:03:55.631 08:48:11 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.631 --rc genhtml_branch_coverage=1 00:03:55.631 --rc genhtml_function_coverage=1 00:03:55.631 --rc genhtml_legend=1 00:03:55.631 --rc geninfo_all_blocks=1 00:03:55.631 --rc geninfo_unexecuted_blocks=1 00:03:55.631 00:03:55.631 ' 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:55.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:55.631 --rc genhtml_branch_coverage=1 00:03:55.631 --rc genhtml_function_coverage=1 00:03:55.631 --rc genhtml_legend=1 00:03:55.631 --rc geninfo_all_blocks=1 00:03:55.631 --rc geninfo_unexecuted_blocks=1 00:03:55.631 00:03:55.631 ' 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2147878 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2147878 00:03:55.631 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 2147878 ']' 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:55.631 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:55.631 08:48:11 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:55.631 [2024-11-20 08:48:11.539214] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:55.631 [2024-11-20 08:48:11.539261] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2147878 ] 00:03:55.631 [2024-11-20 08:48:11.615513] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:03:55.631 [2024-11-20 08:48:11.656955] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:55.631 [2024-11-20 08:48:11.656961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:55.890 08:48:11 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:55.890 08:48:11 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:03:55.890 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2148029 00:03:55.890 08:48:11 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:03:55.890 08:48:11 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:03:56.149 [ 00:03:56.149 "bdev_malloc_delete", 00:03:56.149 "bdev_malloc_create", 00:03:56.149 "bdev_null_resize", 00:03:56.149 "bdev_null_delete", 00:03:56.149 "bdev_null_create", 00:03:56.149 "bdev_nvme_cuse_unregister", 00:03:56.149 "bdev_nvme_cuse_register", 00:03:56.149 "bdev_opal_new_user", 00:03:56.149 "bdev_opal_set_lock_state", 00:03:56.149 "bdev_opal_delete", 00:03:56.149 "bdev_opal_get_info", 00:03:56.149 "bdev_opal_create", 00:03:56.149 "bdev_nvme_opal_revert", 00:03:56.149 "bdev_nvme_opal_init", 00:03:56.149 "bdev_nvme_send_cmd", 00:03:56.149 "bdev_nvme_set_keys", 00:03:56.149 "bdev_nvme_get_path_iostat", 00:03:56.149 "bdev_nvme_get_mdns_discovery_info", 00:03:56.149 "bdev_nvme_stop_mdns_discovery", 00:03:56.149 "bdev_nvme_start_mdns_discovery", 00:03:56.149 "bdev_nvme_set_multipath_policy", 00:03:56.149 "bdev_nvme_set_preferred_path", 00:03:56.149 "bdev_nvme_get_io_paths", 00:03:56.149 "bdev_nvme_remove_error_injection", 00:03:56.149 "bdev_nvme_add_error_injection", 00:03:56.149 "bdev_nvme_get_discovery_info", 00:03:56.149 "bdev_nvme_stop_discovery", 00:03:56.149 "bdev_nvme_start_discovery", 00:03:56.149 "bdev_nvme_get_controller_health_info", 00:03:56.150 "bdev_nvme_disable_controller", 00:03:56.150 "bdev_nvme_enable_controller", 00:03:56.150 "bdev_nvme_reset_controller", 00:03:56.150 "bdev_nvme_get_transport_statistics", 00:03:56.150 "bdev_nvme_apply_firmware", 00:03:56.150 "bdev_nvme_detach_controller", 00:03:56.150 "bdev_nvme_get_controllers", 00:03:56.150 "bdev_nvme_attach_controller", 00:03:56.150 "bdev_nvme_set_hotplug", 00:03:56.150 "bdev_nvme_set_options", 00:03:56.150 "bdev_passthru_delete", 00:03:56.150 "bdev_passthru_create", 00:03:56.150 "bdev_lvol_set_parent_bdev", 00:03:56.150 "bdev_lvol_set_parent", 00:03:56.150 "bdev_lvol_check_shallow_copy", 00:03:56.150 "bdev_lvol_start_shallow_copy", 00:03:56.150 "bdev_lvol_grow_lvstore", 00:03:56.150 
"bdev_lvol_get_lvols", 00:03:56.150 "bdev_lvol_get_lvstores", 00:03:56.150 "bdev_lvol_delete", 00:03:56.150 "bdev_lvol_set_read_only", 00:03:56.150 "bdev_lvol_resize", 00:03:56.150 "bdev_lvol_decouple_parent", 00:03:56.150 "bdev_lvol_inflate", 00:03:56.150 "bdev_lvol_rename", 00:03:56.150 "bdev_lvol_clone_bdev", 00:03:56.150 "bdev_lvol_clone", 00:03:56.150 "bdev_lvol_snapshot", 00:03:56.150 "bdev_lvol_create", 00:03:56.150 "bdev_lvol_delete_lvstore", 00:03:56.150 "bdev_lvol_rename_lvstore", 00:03:56.150 "bdev_lvol_create_lvstore", 00:03:56.150 "bdev_raid_set_options", 00:03:56.150 "bdev_raid_remove_base_bdev", 00:03:56.150 "bdev_raid_add_base_bdev", 00:03:56.150 "bdev_raid_delete", 00:03:56.150 "bdev_raid_create", 00:03:56.150 "bdev_raid_get_bdevs", 00:03:56.150 "bdev_error_inject_error", 00:03:56.150 "bdev_error_delete", 00:03:56.150 "bdev_error_create", 00:03:56.150 "bdev_split_delete", 00:03:56.150 "bdev_split_create", 00:03:56.150 "bdev_delay_delete", 00:03:56.150 "bdev_delay_create", 00:03:56.150 "bdev_delay_update_latency", 00:03:56.150 "bdev_zone_block_delete", 00:03:56.150 "bdev_zone_block_create", 00:03:56.150 "blobfs_create", 00:03:56.150 "blobfs_detect", 00:03:56.150 "blobfs_set_cache_size", 00:03:56.150 "bdev_aio_delete", 00:03:56.150 "bdev_aio_rescan", 00:03:56.150 "bdev_aio_create", 00:03:56.150 "bdev_ftl_set_property", 00:03:56.150 "bdev_ftl_get_properties", 00:03:56.150 "bdev_ftl_get_stats", 00:03:56.150 "bdev_ftl_unmap", 00:03:56.150 "bdev_ftl_unload", 00:03:56.150 "bdev_ftl_delete", 00:03:56.150 "bdev_ftl_load", 00:03:56.150 "bdev_ftl_create", 00:03:56.150 "bdev_virtio_attach_controller", 00:03:56.150 "bdev_virtio_scsi_get_devices", 00:03:56.150 "bdev_virtio_detach_controller", 00:03:56.150 "bdev_virtio_blk_set_hotplug", 00:03:56.150 "bdev_iscsi_delete", 00:03:56.150 "bdev_iscsi_create", 00:03:56.150 "bdev_iscsi_set_options", 00:03:56.150 "accel_error_inject_error", 00:03:56.150 "ioat_scan_accel_module", 00:03:56.150 "dsa_scan_accel_module", 
00:03:56.150 "iaa_scan_accel_module", 00:03:56.150 "vfu_virtio_create_fs_endpoint", 00:03:56.150 "vfu_virtio_create_scsi_endpoint", 00:03:56.150 "vfu_virtio_scsi_remove_target", 00:03:56.150 "vfu_virtio_scsi_add_target", 00:03:56.150 "vfu_virtio_create_blk_endpoint", 00:03:56.150 "vfu_virtio_delete_endpoint", 00:03:56.150 "keyring_file_remove_key", 00:03:56.150 "keyring_file_add_key", 00:03:56.150 "keyring_linux_set_options", 00:03:56.150 "fsdev_aio_delete", 00:03:56.150 "fsdev_aio_create", 00:03:56.150 "iscsi_get_histogram", 00:03:56.150 "iscsi_enable_histogram", 00:03:56.150 "iscsi_set_options", 00:03:56.150 "iscsi_get_auth_groups", 00:03:56.150 "iscsi_auth_group_remove_secret", 00:03:56.150 "iscsi_auth_group_add_secret", 00:03:56.150 "iscsi_delete_auth_group", 00:03:56.150 "iscsi_create_auth_group", 00:03:56.150 "iscsi_set_discovery_auth", 00:03:56.150 "iscsi_get_options", 00:03:56.150 "iscsi_target_node_request_logout", 00:03:56.150 "iscsi_target_node_set_redirect", 00:03:56.150 "iscsi_target_node_set_auth", 00:03:56.150 "iscsi_target_node_add_lun", 00:03:56.150 "iscsi_get_stats", 00:03:56.150 "iscsi_get_connections", 00:03:56.150 "iscsi_portal_group_set_auth", 00:03:56.150 "iscsi_start_portal_group", 00:03:56.150 "iscsi_delete_portal_group", 00:03:56.150 "iscsi_create_portal_group", 00:03:56.150 "iscsi_get_portal_groups", 00:03:56.150 "iscsi_delete_target_node", 00:03:56.150 "iscsi_target_node_remove_pg_ig_maps", 00:03:56.150 "iscsi_target_node_add_pg_ig_maps", 00:03:56.150 "iscsi_create_target_node", 00:03:56.150 "iscsi_get_target_nodes", 00:03:56.150 "iscsi_delete_initiator_group", 00:03:56.150 "iscsi_initiator_group_remove_initiators", 00:03:56.150 "iscsi_initiator_group_add_initiators", 00:03:56.150 "iscsi_create_initiator_group", 00:03:56.150 "iscsi_get_initiator_groups", 00:03:56.150 "nvmf_set_crdt", 00:03:56.150 "nvmf_set_config", 00:03:56.150 "nvmf_set_max_subsystems", 00:03:56.150 "nvmf_stop_mdns_prr", 00:03:56.150 "nvmf_publish_mdns_prr", 
00:03:56.150 "nvmf_subsystem_get_listeners", 00:03:56.150 "nvmf_subsystem_get_qpairs", 00:03:56.150 "nvmf_subsystem_get_controllers", 00:03:56.150 "nvmf_get_stats", 00:03:56.150 "nvmf_get_transports", 00:03:56.150 "nvmf_create_transport", 00:03:56.150 "nvmf_get_targets", 00:03:56.150 "nvmf_delete_target", 00:03:56.150 "nvmf_create_target", 00:03:56.150 "nvmf_subsystem_allow_any_host", 00:03:56.150 "nvmf_subsystem_set_keys", 00:03:56.150 "nvmf_subsystem_remove_host", 00:03:56.150 "nvmf_subsystem_add_host", 00:03:56.150 "nvmf_ns_remove_host", 00:03:56.150 "nvmf_ns_add_host", 00:03:56.150 "nvmf_subsystem_remove_ns", 00:03:56.150 "nvmf_subsystem_set_ns_ana_group", 00:03:56.150 "nvmf_subsystem_add_ns", 00:03:56.150 "nvmf_subsystem_listener_set_ana_state", 00:03:56.150 "nvmf_discovery_get_referrals", 00:03:56.150 "nvmf_discovery_remove_referral", 00:03:56.150 "nvmf_discovery_add_referral", 00:03:56.150 "nvmf_subsystem_remove_listener", 00:03:56.150 "nvmf_subsystem_add_listener", 00:03:56.150 "nvmf_delete_subsystem", 00:03:56.150 "nvmf_create_subsystem", 00:03:56.150 "nvmf_get_subsystems", 00:03:56.150 "env_dpdk_get_mem_stats", 00:03:56.150 "nbd_get_disks", 00:03:56.150 "nbd_stop_disk", 00:03:56.150 "nbd_start_disk", 00:03:56.150 "ublk_recover_disk", 00:03:56.150 "ublk_get_disks", 00:03:56.150 "ublk_stop_disk", 00:03:56.150 "ublk_start_disk", 00:03:56.150 "ublk_destroy_target", 00:03:56.150 "ublk_create_target", 00:03:56.150 "virtio_blk_create_transport", 00:03:56.150 "virtio_blk_get_transports", 00:03:56.150 "vhost_controller_set_coalescing", 00:03:56.150 "vhost_get_controllers", 00:03:56.150 "vhost_delete_controller", 00:03:56.150 "vhost_create_blk_controller", 00:03:56.150 "vhost_scsi_controller_remove_target", 00:03:56.150 "vhost_scsi_controller_add_target", 00:03:56.150 "vhost_start_scsi_controller", 00:03:56.150 "vhost_create_scsi_controller", 00:03:56.150 "thread_set_cpumask", 00:03:56.150 "scheduler_set_options", 00:03:56.150 "framework_get_governor", 00:03:56.150 
"framework_get_scheduler", 00:03:56.150 "framework_set_scheduler", 00:03:56.150 "framework_get_reactors", 00:03:56.150 "thread_get_io_channels", 00:03:56.150 "thread_get_pollers", 00:03:56.150 "thread_get_stats", 00:03:56.150 "framework_monitor_context_switch", 00:03:56.150 "spdk_kill_instance", 00:03:56.150 "log_enable_timestamps", 00:03:56.150 "log_get_flags", 00:03:56.150 "log_clear_flag", 00:03:56.150 "log_set_flag", 00:03:56.150 "log_get_level", 00:03:56.150 "log_set_level", 00:03:56.150 "log_get_print_level", 00:03:56.150 "log_set_print_level", 00:03:56.150 "framework_enable_cpumask_locks", 00:03:56.150 "framework_disable_cpumask_locks", 00:03:56.150 "framework_wait_init", 00:03:56.150 "framework_start_init", 00:03:56.150 "scsi_get_devices", 00:03:56.150 "bdev_get_histogram", 00:03:56.150 "bdev_enable_histogram", 00:03:56.150 "bdev_set_qos_limit", 00:03:56.150 "bdev_set_qd_sampling_period", 00:03:56.150 "bdev_get_bdevs", 00:03:56.150 "bdev_reset_iostat", 00:03:56.150 "bdev_get_iostat", 00:03:56.150 "bdev_examine", 00:03:56.150 "bdev_wait_for_examine", 00:03:56.150 "bdev_set_options", 00:03:56.150 "accel_get_stats", 00:03:56.150 "accel_set_options", 00:03:56.150 "accel_set_driver", 00:03:56.150 "accel_crypto_key_destroy", 00:03:56.150 "accel_crypto_keys_get", 00:03:56.150 "accel_crypto_key_create", 00:03:56.150 "accel_assign_opc", 00:03:56.150 "accel_get_module_info", 00:03:56.150 "accel_get_opc_assignments", 00:03:56.150 "vmd_rescan", 00:03:56.151 "vmd_remove_device", 00:03:56.151 "vmd_enable", 00:03:56.151 "sock_get_default_impl", 00:03:56.151 "sock_set_default_impl", 00:03:56.151 "sock_impl_set_options", 00:03:56.151 "sock_impl_get_options", 00:03:56.151 "iobuf_get_stats", 00:03:56.151 "iobuf_set_options", 00:03:56.151 "keyring_get_keys", 00:03:56.151 "vfu_tgt_set_base_path", 00:03:56.151 "framework_get_pci_devices", 00:03:56.151 "framework_get_config", 00:03:56.151 "framework_get_subsystems", 00:03:56.151 "fsdev_set_opts", 00:03:56.151 "fsdev_get_opts", 
00:03:56.151 "trace_get_info", 00:03:56.151 "trace_get_tpoint_group_mask", 00:03:56.151 "trace_disable_tpoint_group", 00:03:56.151 "trace_enable_tpoint_group", 00:03:56.151 "trace_clear_tpoint_mask", 00:03:56.151 "trace_set_tpoint_mask", 00:03:56.151 "notify_get_notifications", 00:03:56.151 "notify_get_types", 00:03:56.151 "spdk_get_version", 00:03:56.151 "rpc_get_methods" 00:03:56.151 ] 00:03:56.151 08:48:12 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:56.151 08:48:12 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:03:56.151 08:48:12 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2147878 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 2147878 ']' 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 2147878 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2147878 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2147878' 00:03:56.151 killing process with pid 2147878 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 2147878 00:03:56.151 08:48:12 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 2147878 00:03:56.720 00:03:56.720 real 0m1.177s 00:03:56.720 user 0m1.989s 00:03:56.720 sys 0m0.451s 00:03:56.720 08:48:12 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:56.720 08:48:12 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:03:56.720 ************************************ 00:03:56.720 END TEST spdkcli_tcp 00:03:56.720 ************************************ 00:03:56.720 08:48:12 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:56.720 08:48:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:56.720 08:48:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:56.720 08:48:12 -- common/autotest_common.sh@10 -- # set +x 00:03:56.720 ************************************ 00:03:56.720 START TEST dpdk_mem_utility 00:03:56.720 ************************************ 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:03:56.720 * Looking for test storage... 00:03:56.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:56.720 08:48:12 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:03:56.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.720 --rc genhtml_branch_coverage=1 00:03:56.720 --rc genhtml_function_coverage=1 00:03:56.720 --rc genhtml_legend=1 00:03:56.720 --rc geninfo_all_blocks=1 00:03:56.720 --rc geninfo_unexecuted_blocks=1 00:03:56.720 00:03:56.720 ' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:56.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.720 --rc genhtml_branch_coverage=1 00:03:56.720 --rc genhtml_function_coverage=1 00:03:56.720 --rc genhtml_legend=1 00:03:56.720 --rc geninfo_all_blocks=1 00:03:56.720 --rc geninfo_unexecuted_blocks=1 00:03:56.720 00:03:56.720 ' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:56.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.720 --rc genhtml_branch_coverage=1 00:03:56.720 --rc genhtml_function_coverage=1 00:03:56.720 --rc genhtml_legend=1 00:03:56.720 --rc geninfo_all_blocks=1 00:03:56.720 --rc geninfo_unexecuted_blocks=1 00:03:56.720 00:03:56.720 ' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:56.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:56.720 --rc genhtml_branch_coverage=1 00:03:56.720 --rc genhtml_function_coverage=1 00:03:56.720 --rc genhtml_legend=1 00:03:56.720 --rc geninfo_all_blocks=1 00:03:56.720 --rc geninfo_unexecuted_blocks=1 00:03:56.720 00:03:56.720 ' 00:03:56.720 08:48:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:56.720 08:48:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2148182 00:03:56.720 08:48:12 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:56.720 08:48:12 
dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2148182 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 2148182 ']' 00:03:56.720 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:03:56.721 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:03:56.721 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:56.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:56.721 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:03:56.721 08:48:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:56.979 [2024-11-20 08:48:12.774429] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:03:56.979 [2024-11-20 08:48:12.774477] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148182 ] 00:03:56.979 [2024-11-20 08:48:12.849706] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:56.979 [2024-11-20 08:48:12.892170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:57.239 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:03:57.239 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:03:57.239 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:03:57.239 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:03:57.239 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:03:57.239 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:57.239 { 00:03:57.239 "filename": "/tmp/spdk_mem_dump.txt" 00:03:57.239 } 00:03:57.240 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:03:57.240 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:03:57.240 DPDK memory size 810.000000 MiB in 1 heap(s) 00:03:57.240 1 heaps totaling size 810.000000 MiB 00:03:57.240 size: 810.000000 MiB heap id: 0 00:03:57.240 end heaps---------- 00:03:57.240 9 mempools totaling size 595.772034 MiB 00:03:57.240 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:03:57.240 size: 158.602051 MiB name: PDU_data_out_Pool 00:03:57.240 size: 92.545471 MiB name: bdev_io_2148182 00:03:57.240 size: 50.003479 MiB name: msgpool_2148182 00:03:57.240 size: 36.509338 MiB name: fsdev_io_2148182 00:03:57.240 size: 21.763794 MiB name: PDU_Pool 00:03:57.240 size: 19.513306 MiB name: SCSI_TASK_Pool 00:03:57.240 size: 4.133484 MiB name: evtpool_2148182 00:03:57.240 size: 0.026123 MiB name: Session_Pool 00:03:57.240 end mempools------- 00:03:57.240 6 memzones totaling size 4.142822 MiB 00:03:57.240 size: 1.000366 MiB name: RG_ring_0_2148182 00:03:57.240 size: 1.000366 MiB name: RG_ring_1_2148182 00:03:57.240 size: 1.000366 MiB name: RG_ring_4_2148182 00:03:57.240 size: 1.000366 MiB name: RG_ring_5_2148182 00:03:57.240 size: 0.125366 MiB name: RG_ring_2_2148182 00:03:57.240 size: 0.015991 MiB name: RG_ring_3_2148182 00:03:57.240 end memzones------- 00:03:57.240 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:03:57.240 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:03:57.240 list of free elements. 
size: 10.862488 MiB 00:03:57.240 element at address: 0x200018a00000 with size: 0.999878 MiB 00:03:57.240 element at address: 0x200018c00000 with size: 0.999878 MiB 00:03:57.240 element at address: 0x200000400000 with size: 0.998535 MiB 00:03:57.240 element at address: 0x200031800000 with size: 0.994446 MiB 00:03:57.240 element at address: 0x200006400000 with size: 0.959839 MiB 00:03:57.240 element at address: 0x200012c00000 with size: 0.954285 MiB 00:03:57.240 element at address: 0x200018e00000 with size: 0.936584 MiB 00:03:57.240 element at address: 0x200000200000 with size: 0.717346 MiB 00:03:57.240 element at address: 0x20001a600000 with size: 0.582886 MiB 00:03:57.240 element at address: 0x200000c00000 with size: 0.495422 MiB 00:03:57.240 element at address: 0x20000a600000 with size: 0.490723 MiB 00:03:57.240 element at address: 0x200019000000 with size: 0.485657 MiB 00:03:57.240 element at address: 0x200003e00000 with size: 0.481934 MiB 00:03:57.240 element at address: 0x200027a00000 with size: 0.410034 MiB 00:03:57.240 element at address: 0x200000800000 with size: 0.355042 MiB 00:03:57.240 list of standard malloc elements. 
size: 199.218628 MiB 00:03:57.240 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:03:57.240 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:03:57.240 element at address: 0x200018afff80 with size: 1.000122 MiB 00:03:57.240 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:03:57.240 element at address: 0x200018efff80 with size: 1.000122 MiB 00:03:57.240 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:03:57.240 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:03:57.240 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:03:57.240 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:03:57.240 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000085b040 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000085f300 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000087f680 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200000cff000 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:03:57.240 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200003efb980 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:03:57.240 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20001a695380 with size: 0.000183 MiB 00:03:57.240 element at address: 0x20001a695440 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200027a69040 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:03:57.240 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:03:57.240 list of memzone associated elements. 
size: 599.918884 MiB 00:03:57.240 element at address: 0x20001a695500 with size: 211.416748 MiB 00:03:57.240 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:03:57.240 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:03:57.240 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:03:57.240 element at address: 0x200012df4780 with size: 92.045044 MiB 00:03:57.240 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_2148182_0 00:03:57.240 element at address: 0x200000dff380 with size: 48.003052 MiB 00:03:57.240 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2148182_0 00:03:57.240 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:03:57.240 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_2148182_0 00:03:57.240 element at address: 0x2000191be940 with size: 20.255554 MiB 00:03:57.240 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:03:57.240 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:03:57.240 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:03:57.240 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:03:57.240 associated memzone info: size: 3.000122 MiB name: MP_evtpool_2148182_0 00:03:57.240 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:03:57.240 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2148182 00:03:57.240 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:03:57.240 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2148182 00:03:57.240 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:03:57.240 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:03:57.240 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:03:57.240 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:03:57.240 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:03:57.240 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:03:57.240 element at address: 0x200003efba40 with size: 1.008118 MiB 00:03:57.240 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:03:57.240 element at address: 0x200000cff180 with size: 1.000488 MiB 00:03:57.240 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2148182 00:03:57.240 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:03:57.240 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2148182 00:03:57.240 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:03:57.240 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2148182 00:03:57.240 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:03:57.240 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2148182 00:03:57.240 element at address: 0x20000087f740 with size: 0.500488 MiB 00:03:57.240 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_2148182 00:03:57.240 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:03:57.240 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2148182 00:03:57.241 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:03:57.241 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:03:57.241 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:03:57.241 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:03:57.241 element at address: 0x20001907c540 with size: 0.250488 MiB 00:03:57.241 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:03:57.241 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:03:57.241 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_2148182 00:03:57.241 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:03:57.241 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2148182 00:03:57.241 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:03:57.241 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:03:57.241 element at address: 0x200027a69100 with size: 0.023743 MiB 00:03:57.241 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:03:57.241 element at address: 0x20000085b100 with size: 0.016113 MiB 00:03:57.241 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2148182 00:03:57.241 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:03:57.241 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:03:57.241 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:03:57.241 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2148182 00:03:57.241 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:03:57.241 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_2148182 00:03:57.241 element at address: 0x20000085af00 with size: 0.000305 MiB 00:03:57.241 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2148182 00:03:57.241 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:03:57.241 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:03:57.241 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:03:57.241 08:48:13 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2148182 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 2148182 ']' 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 2148182 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2148182 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:03:57.241 08:48:13 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2148182' 00:03:57.241 killing process with pid 2148182 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 2148182 00:03:57.241 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 2148182 00:03:57.810 00:03:57.810 real 0m1.018s 00:03:57.810 user 0m0.957s 00:03:57.810 sys 0m0.422s 00:03:57.810 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:57.810 08:48:13 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:03:57.810 ************************************ 00:03:57.810 END TEST dpdk_mem_utility 00:03:57.810 ************************************ 00:03:57.810 08:48:13 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:57.810 08:48:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:03:57.810 08:48:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.810 08:48:13 -- common/autotest_common.sh@10 -- # set +x 00:03:57.810 ************************************ 00:03:57.810 START TEST event 00:03:57.810 ************************************ 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:03:57.810 * Looking for test storage... 
00:03:57.810 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1693 -- # lcov --version 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:57.810 08:48:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:57.810 08:48:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:57.810 08:48:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:57.810 08:48:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:03:57.810 08:48:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:03:57.810 08:48:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:03:57.810 08:48:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:03:57.810 08:48:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:03:57.810 08:48:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:03:57.810 08:48:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:03:57.810 08:48:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:57.810 08:48:13 event -- scripts/common.sh@344 -- # case "$op" in 00:03:57.810 08:48:13 event -- scripts/common.sh@345 -- # : 1 00:03:57.810 08:48:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:57.810 08:48:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:57.810 08:48:13 event -- scripts/common.sh@365 -- # decimal 1 00:03:57.810 08:48:13 event -- scripts/common.sh@353 -- # local d=1 00:03:57.810 08:48:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:57.810 08:48:13 event -- scripts/common.sh@355 -- # echo 1 00:03:57.810 08:48:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:03:57.810 08:48:13 event -- scripts/common.sh@366 -- # decimal 2 00:03:57.810 08:48:13 event -- scripts/common.sh@353 -- # local d=2 00:03:57.810 08:48:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:57.810 08:48:13 event -- scripts/common.sh@355 -- # echo 2 00:03:57.810 08:48:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:03:57.810 08:48:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:57.810 08:48:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:57.810 08:48:13 event -- scripts/common.sh@368 -- # return 0 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.810 --rc genhtml_branch_coverage=1 00:03:57.810 --rc genhtml_function_coverage=1 00:03:57.810 --rc genhtml_legend=1 00:03:57.810 --rc geninfo_all_blocks=1 00:03:57.810 --rc geninfo_unexecuted_blocks=1 00:03:57.810 00:03:57.810 ' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.810 --rc genhtml_branch_coverage=1 00:03:57.810 --rc genhtml_function_coverage=1 00:03:57.810 --rc genhtml_legend=1 00:03:57.810 --rc geninfo_all_blocks=1 00:03:57.810 --rc geninfo_unexecuted_blocks=1 00:03:57.810 00:03:57.810 ' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:57.810 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:03:57.810 --rc genhtml_branch_coverage=1 00:03:57.810 --rc genhtml_function_coverage=1 00:03:57.810 --rc genhtml_legend=1 00:03:57.810 --rc geninfo_all_blocks=1 00:03:57.810 --rc geninfo_unexecuted_blocks=1 00:03:57.810 00:03:57.810 ' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:57.810 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:57.810 --rc genhtml_branch_coverage=1 00:03:57.810 --rc genhtml_function_coverage=1 00:03:57.810 --rc genhtml_legend=1 00:03:57.810 --rc geninfo_all_blocks=1 00:03:57.810 --rc geninfo_unexecuted_blocks=1 00:03:57.810 00:03:57.810 ' 00:03:57.810 08:48:13 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:03:57.810 08:48:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:03:57.810 08:48:13 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:03:57.810 08:48:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:57.810 08:48:13 event -- common/autotest_common.sh@10 -- # set +x 00:03:57.810 ************************************ 00:03:57.810 START TEST event_perf 00:03:57.810 ************************************ 00:03:57.810 08:48:13 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:03:58.069 Running I/O for 1 seconds...[2024-11-20 08:48:13.863000] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:58.069 [2024-11-20 08:48:13.863071] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148472 ] 00:03:58.069 [2024-11-20 08:48:13.942707] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:03:58.069 [2024-11-20 08:48:13.986635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:03:58.069 [2024-11-20 08:48:13.986745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:03:58.069 [2024-11-20 08:48:13.986852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:03:58.069 [2024-11-20 08:48:13.986853] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:03:59.004 Running I/O for 1 seconds... 00:03:59.004 lcore 0: 202279 00:03:59.004 lcore 1: 202278 00:03:59.004 lcore 2: 202278 00:03:59.005 lcore 3: 202277 00:03:59.005 done. 
00:03:59.005 00:03:59.005 real 0m1.186s 00:03:59.005 user 0m4.095s 00:03:59.005 sys 0m0.087s 00:03:59.005 08:48:15 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:03:59.005 08:48:15 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:03:59.005 ************************************ 00:03:59.005 END TEST event_perf 00:03:59.005 ************************************ 00:03:59.264 08:48:15 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:59.264 08:48:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:03:59.264 08:48:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:03:59.264 08:48:15 event -- common/autotest_common.sh@10 -- # set +x 00:03:59.264 ************************************ 00:03:59.264 START TEST event_reactor 00:03:59.264 ************************************ 00:03:59.264 08:48:15 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:03:59.264 [2024-11-20 08:48:15.115679] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:03:59.264 [2024-11-20 08:48:15.115745] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148732 ] 00:03:59.264 [2024-11-20 08:48:15.195056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:59.264 [2024-11-20 08:48:15.234960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:00.641 test_start 00:04:00.641 oneshot 00:04:00.641 tick 100 00:04:00.641 tick 100 00:04:00.641 tick 250 00:04:00.641 tick 100 00:04:00.641 tick 100 00:04:00.641 tick 100 00:04:00.641 tick 250 00:04:00.641 tick 500 00:04:00.641 tick 100 00:04:00.641 tick 100 00:04:00.641 tick 250 00:04:00.641 tick 100 00:04:00.641 tick 100 00:04:00.641 test_end 00:04:00.641 00:04:00.641 real 0m1.178s 00:04:00.641 user 0m1.096s 00:04:00.641 sys 0m0.077s 00:04:00.641 08:48:16 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:00.641 08:48:16 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:00.641 ************************************ 00:04:00.641 END TEST event_reactor 00:04:00.641 ************************************ 00:04:00.641 08:48:16 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:00.641 08:48:16 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:00.641 08:48:16 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:00.641 08:48:16 event -- common/autotest_common.sh@10 -- # set +x 00:04:00.641 ************************************ 00:04:00.641 START TEST event_reactor_perf 00:04:00.641 ************************************ 00:04:00.641 08:48:16 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:00.641 [2024-11-20 08:48:16.365276] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:00.641 [2024-11-20 08:48:16.365349] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2148979 ] 00:04:00.641 [2024-11-20 08:48:16.442687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:00.641 [2024-11-20 08:48:16.482329] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:01.579 test_start 00:04:01.579 test_end 00:04:01.579 Performance: 501358 events per second 00:04:01.579 00:04:01.579 real 0m1.174s 00:04:01.579 user 0m1.097s 00:04:01.579 sys 0m0.073s 00:04:01.579 08:48:17 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:01.579 08:48:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:01.579 ************************************ 00:04:01.579 END TEST event_reactor_perf 00:04:01.579 ************************************ 00:04:01.579 08:48:17 event -- event/event.sh@49 -- # uname -s 00:04:01.579 08:48:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:01.579 08:48:17 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:01.579 08:48:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:01.579 08:48:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:01.579 08:48:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:01.579 ************************************ 00:04:01.579 START TEST event_scheduler 00:04:01.579 ************************************ 00:04:01.579 08:48:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:01.839 * Looking for test storage... 00:04:01.839 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:01.839 08:48:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.839 --rc genhtml_branch_coverage=1 00:04:01.839 --rc genhtml_function_coverage=1 00:04:01.839 --rc genhtml_legend=1 00:04:01.839 --rc geninfo_all_blocks=1 00:04:01.839 --rc geninfo_unexecuted_blocks=1 00:04:01.839 00:04:01.839 ' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.839 --rc genhtml_branch_coverage=1 00:04:01.839 --rc genhtml_function_coverage=1 00:04:01.839 --rc 
genhtml_legend=1 00:04:01.839 --rc geninfo_all_blocks=1 00:04:01.839 --rc geninfo_unexecuted_blocks=1 00:04:01.839 00:04:01.839 ' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.839 --rc genhtml_branch_coverage=1 00:04:01.839 --rc genhtml_function_coverage=1 00:04:01.839 --rc genhtml_legend=1 00:04:01.839 --rc geninfo_all_blocks=1 00:04:01.839 --rc geninfo_unexecuted_blocks=1 00:04:01.839 00:04:01.839 ' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:01.839 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:01.839 --rc genhtml_branch_coverage=1 00:04:01.839 --rc genhtml_function_coverage=1 00:04:01.839 --rc genhtml_legend=1 00:04:01.839 --rc geninfo_all_blocks=1 00:04:01.839 --rc geninfo_unexecuted_blocks=1 00:04:01.839 00:04:01.839 ' 00:04:01.839 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:01.839 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2149263 00:04:01.839 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:01.839 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:01.839 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2149263 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 2149263 ']' 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:01.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:01.839 08:48:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:01.839 [2024-11-20 08:48:17.819019] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:01.839 [2024-11-20 08:48:17.819069] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2149263 ] 00:04:02.100 [2024-11-20 08:48:17.893038] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:02.100 [2024-11-20 08:48:17.938531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.100 [2024-11-20 08:48:17.938657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:02.100 [2024-11-20 08:48:17.938763] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:02.100 [2024-11-20 08:48:17.938764] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:02.100 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 [2024-11-20 08:48:17.975235] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:02.100 [2024-11-20 08:48:17.975252] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:02.100 [2024-11-20 08:48:17.975261] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:02.100 [2024-11-20 08:48:17.975266] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:02.100 [2024-11-20 08:48:17.975271] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:17 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 [2024-11-20 08:48:18.049472] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:02.100 08:48:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:02.100 08:48:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:02.100 08:48:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 ************************************ 00:04:02.100 START TEST scheduler_create_thread 00:04:02.100 ************************************ 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 2 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 3 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 4 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 5 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.100 6 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.100 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 7 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 8 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 9 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 10 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:02.360 08:48:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:03.298 08:48:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:03.298 08:48:19 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:03.298 08:48:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:03.298 08:48:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:04.674 08:48:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:04.674 08:48:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:04.674 08:48:20 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:04.674 08:48:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:04.674 08:48:20 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:05.610 08:48:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:05.610 00:04:05.610 real 0m3.382s 00:04:05.610 user 0m0.026s 00:04:05.610 sys 0m0.004s 00:04:05.610 08:48:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:05.610 08:48:21 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:05.610 ************************************ 00:04:05.610 END TEST scheduler_create_thread 00:04:05.610 ************************************ 00:04:05.610 08:48:21 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:05.610 08:48:21 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2149263 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 2149263 ']' 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 2149263 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2149263 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2149263' 00:04:05.610 killing process with pid 2149263 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 2149263 00:04:05.610 08:48:21 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 2149263 00:04:05.869 [2024-11-20 08:48:21.845749] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:04:06.128 00:04:06.128 real 0m4.453s 00:04:06.128 user 0m7.772s 00:04:06.128 sys 0m0.368s 00:04:06.128 08:48:22 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:06.128 08:48:22 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 END TEST event_scheduler 00:04:06.128 ************************************ 00:04:06.128 08:48:22 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:06.128 08:48:22 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:06.128 08:48:22 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:06.128 08:48:22 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:06.128 08:48:22 event -- common/autotest_common.sh@10 -- # set +x 00:04:06.128 ************************************ 00:04:06.128 START TEST app_repeat 00:04:06.128 ************************************ 00:04:06.128 08:48:22 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2150005 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2150005' 00:04:06.128 Process app_repeat pid: 2150005 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:06.128 spdk_app_start Round 0 00:04:06.128 08:48:22 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2150005 /var/tmp/spdk-nbd.sock 00:04:06.128 08:48:22 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2150005 ']' 00:04:06.128 08:48:22 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:06.128 08:48:22 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:06.129 08:48:22 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:06.129 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:06.129 08:48:22 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:06.129 08:48:22 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:06.129 [2024-11-20 08:48:22.159221] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:04:06.129 [2024-11-20 08:48:22.159274] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2150005 ] 00:04:06.387 [2024-11-20 08:48:22.236887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:06.387 [2024-11-20 08:48:22.278106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:06.387 [2024-11-20 08:48:22.278107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.387 08:48:22 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:06.387 08:48:22 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:06.387 08:48:22 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:06.646 Malloc0 00:04:06.646 08:48:22 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:06.904 Malloc1 00:04:06.904 08:48:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:06.904 
08:48:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:06.904 08:48:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:07.163 /dev/nbd0 00:04:07.163 08:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:07.163 08:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:04:07.163 1+0 records in 00:04:07.163 1+0 records out 00:04:07.163 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000159481 s, 25.7 MB/s 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.163 08:48:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.163 08:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.163 08:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.163 08:48:23 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:07.421 /dev/nbd1 00:04:07.421 08:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:07.421 08:48:23 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:07.421 08:48:23 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:07.422 08:48:23 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:07.422 1+0 records in 00:04:07.422 1+0 records out 00:04:07.422 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000236592 s, 17.3 MB/s 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:07.422 08:48:23 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:07.422 08:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:07.422 08:48:23 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:07.422 08:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:07.422 08:48:23 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:07.422 08:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:07.681 { 00:04:07.681 "nbd_device": "/dev/nbd0", 00:04:07.681 "bdev_name": "Malloc0" 00:04:07.681 }, 00:04:07.681 { 00:04:07.681 "nbd_device": "/dev/nbd1", 00:04:07.681 "bdev_name": "Malloc1" 00:04:07.681 } 00:04:07.681 ]' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:07.681 { 00:04:07.681 "nbd_device": "/dev/nbd0", 00:04:07.681 "bdev_name": "Malloc0" 00:04:07.681 
}, 00:04:07.681 { 00:04:07.681 "nbd_device": "/dev/nbd1", 00:04:07.681 "bdev_name": "Malloc1" 00:04:07.681 } 00:04:07.681 ]' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:07.681 /dev/nbd1' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:07.681 /dev/nbd1' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:07.681 256+0 records in 00:04:07.681 256+0 records out 00:04:07.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106781 s, 98.2 MB/s 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:07.681 256+0 records in
00:04:07.681 256+0 records out
00:04:07.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0143652 s, 73.0 MB/s
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:07.681 256+0 records in
00:04:07.681 256+0 records out
00:04:07.681 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0149875 s, 70.0 MB/s
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:07.681 08:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:07.939 08:48:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:08.198 08:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:08.457 08:48:24 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:08.457 08:48:24 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:08.716 08:48:24 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:08.716 [2024-11-20 08:48:24.664684] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:08.716 [2024-11-20 08:48:24.701783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:08.716 [2024-11-20 08:48:24.701784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:08.716 [2024-11-20 08:48:24.742702] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:08.716 [2024-11-20 08:48:24.742745] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:12.000 08:48:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:12.000 08:48:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1'
00:04:12.000 spdk_app_start Round 1
00:04:12.000 08:48:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2150005 /var/tmp/spdk-nbd.sock
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2150005 ']'
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:12.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:12.000 08:48:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:12.000 08:48:27 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:12.000 Malloc0
00:04:12.000 08:48:27 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:12.258 Malloc1
00:04:12.258 08:48:28 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:12.258 08:48:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:12.517 /dev/nbd0
00:04:12.517 08:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:12.517 08:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:12.517 1+0 records in
00:04:12.517 1+0 records out
00:04:12.517 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000230669 s, 17.8 MB/s
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:12.517 08:48:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:12.517 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:12.517 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:12.517 08:48:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:12.776 /dev/nbd1
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:12.776 1+0 records in
00:04:12.776 1+0 records out
00:04:12.776 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000235696 s, 17.4 MB/s
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:12.776 08:48:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:12.776 08:48:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:13.035 {
00:04:13.035 "nbd_device": "/dev/nbd0",
00:04:13.035 "bdev_name": "Malloc0"
00:04:13.035 },
00:04:13.035 {
00:04:13.035 "nbd_device": "/dev/nbd1",
00:04:13.035 "bdev_name": "Malloc1"
00:04:13.035 }
00:04:13.035 ]'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:13.035 {
00:04:13.035 "nbd_device": "/dev/nbd0",
00:04:13.035 "bdev_name": "Malloc0"
00:04:13.035 },
00:04:13.035 {
00:04:13.035 "nbd_device": "/dev/nbd1",
00:04:13.035 "bdev_name": "Malloc1"
00:04:13.035 }
00:04:13.035 ]'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:13.035 /dev/nbd1'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:13.035 /dev/nbd1'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:13.035 08:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:13.036 256+0 records in
00:04:13.036 256+0 records out
00:04:13.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00440219 s, 238 MB/s
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:13.036 256+0 records in
00:04:13.036 256+0 records out
00:04:13.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137473 s, 76.3 MB/s
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:13.036 256+0 records in
00:04:13.036 256+0 records out
00:04:13.036 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150106 s, 69.9 MB/s
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:13.036 08:48:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:13.296 08:48:29 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:13.554 08:48:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:13.812 08:48:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:13.812 08:48:29 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:14.070 08:48:29 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:14.070 [2024-11-20 08:48:30.024157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:14.070 [2024-11-20 08:48:30.065601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:14.070 [2024-11-20 08:48:30.065602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:04:14.070 [2024-11-20 08:48:30.107683] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:04:14.070 [2024-11-20 08:48:30.107724] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:04:17.349 08:48:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2}
00:04:17.349 08:48:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2'
00:04:17.349 spdk_app_start Round 2
00:04:17.349 08:48:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2150005 /var/tmp/spdk-nbd.sock
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2150005 ']'
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
00:04:17.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable
00:04:17.349 08:48:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x
00:04:17.349 08:48:33 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:04:17.349 08:48:33 event.app_repeat -- common/autotest_common.sh@868 -- # return 0
00:04:17.349 08:48:33 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:17.349 Malloc0
00:04:17.349 08:48:33 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096
00:04:17.608 Malloc1
00:04:17.608 08:48:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1'
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1')
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 ))
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:17.608 08:48:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0
00:04:17.866 /dev/nbd0
00:04:17.866 08:48:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0
00:04:17.866 08:48:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:17.866 1+0 records in
00:04:17.866 1+0 records out
00:04:17.866 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000264492 s, 15.5 MB/s
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:17.866 08:48:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:17.866 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:17.866 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:17.866 08:48:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1
00:04:18.125 /dev/nbd1
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 ))
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 ))
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@877 -- # break
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 ))
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 ))
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct
00:04:18.125 1+0 records in
00:04:18.125 1+0 records out
00:04:18.125 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000202991 s, 20.2 MB/s
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']'
00:04:18.125 08:48:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ ))
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 ))
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.125 08:48:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[
00:04:18.384 {
00:04:18.384 "nbd_device": "/dev/nbd0",
00:04:18.384 "bdev_name": "Malloc0"
00:04:18.384 },
00:04:18.384 {
00:04:18.384 "nbd_device": "/dev/nbd1",
00:04:18.384 "bdev_name": "Malloc1"
00:04:18.384 }
00:04:18.384 ]'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[
00:04:18.384 {
00:04:18.384 "nbd_device": "/dev/nbd0",
00:04:18.384 "bdev_name": "Malloc0"
00:04:18.384 },
00:04:18.384 {
00:04:18.384 "nbd_device": "/dev/nbd1",
00:04:18.384 "bdev_name": "Malloc1"
00:04:18.384 }
00:04:18.384 ]'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0
00:04:18.384 /dev/nbd1'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0
00:04:18.384 /dev/nbd1'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:04:18.384 256+0 records in
00:04:18.384 256+0 records out
00:04:18.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00988695 s, 106 MB/s
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:04:18.384 256+0 records in
00:04:18.384 256+0 records out
00:04:18.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0140624 s, 74.6 MB/s
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:04:18.384 256+0 records in
00:04:18.384 256+0 records out
00:04:18.384 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151495 s, 69.2 MB/s
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:18.384 08:48:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:04:18.643 08:48:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:04:18.902 08:48:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:04:19.161 08:48:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:04:19.161 08:48:34 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:04:19.420 08:48:35 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:04:19.420 [2024-11-20 08:48:35.355724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:04:19.420 [2024-11-20 08:48:35.393031] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:04:19.420 [2024-11-20 08:48:35.393033]
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:19.420 [2024-11-20 08:48:35.434096] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:19.420 [2024-11-20 08:48:35.434140] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:22.762 08:48:38 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2150005 /var/tmp/spdk-nbd.sock 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 2150005 ']' 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:22.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
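The nbd_dd_data_verify steps traced above (write a 1 MiB random pattern file, dd it onto each nbd device, then cmp each device back against the file) can be sketched as below. This is a minimal sketch, assuming plain temp files as stand-ins for /dev/nbd0 and /dev/nbd1 so it runs anywhere; a real run against block devices would also pass oflag=direct as the trace shows.

```shell
set -e
tmp_file=$(mktemp)
dev_list=("$(mktemp)" "$(mktemp)")   # stand-ins for the /dev/nbd0 /dev/nbd1 list

# write phase: generate a 1 MiB random pattern, then copy it to each "device"
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for dev in "${dev_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 status=none
done

# verify phase: byte-compare the first 1 MiB of each "device" against the file
verified=0
for dev in "${dev_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev" && verified=$((verified + 1))
done
```

The same tmp file is reused for every device, which is why the trace removes it only once, after all cmp passes complete.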
00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:22.763 08:48:38 event.app_repeat -- event/event.sh@39 -- # killprocess 2150005 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 2150005 ']' 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 2150005 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2150005 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2150005' 00:04:22.763 killing process with pid 2150005 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@973 -- # kill 2150005 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@978 -- # wait 2150005 00:04:22.763 spdk_app_start is called in Round 0. 00:04:22.763 Shutdown signal received, stop current app iteration 00:04:22.763 Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization... 00:04:22.763 spdk_app_start is called in Round 1. 00:04:22.763 Shutdown signal received, stop current app iteration 00:04:22.763 Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization... 00:04:22.763 spdk_app_start is called in Round 2. 
00:04:22.763 Shutdown signal received, stop current app iteration 00:04:22.763 Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 reinitialization... 00:04:22.763 spdk_app_start is called in Round 3. 00:04:22.763 Shutdown signal received, stop current app iteration 00:04:22.763 08:48:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:04:22.763 08:48:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:04:22.763 00:04:22.763 real 0m16.471s 00:04:22.763 user 0m36.288s 00:04:22.763 sys 0m2.519s 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.763 08:48:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:22.763 ************************************ 00:04:22.763 END TEST app_repeat 00:04:22.763 ************************************ 00:04:22.763 08:48:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:04:22.763 08:48:38 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:22.763 08:48:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.763 08:48:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.763 08:48:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:22.763 ************************************ 00:04:22.763 START TEST cpu_locks 00:04:22.763 ************************************ 00:04:22.763 08:48:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:04:22.763 * Looking for test storage... 
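The "killing process with pid ..." lines in the trace come from a killprocess-style helper: probe that the pid is alive, resolve its command name with ps, refuse to kill a sudo wrapper, then kill and reap it. A minimal sketch of that skeleton, assuming the real helper in autotest_common.sh carries extra retry and logging logic not reproduced here:

```shell
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1      # still running at all?
    local name
    name=$(ps --no-headers -o comm= "$pid")     # resolve the command name
    [ "$name" = sudo ] && return 1              # never signal a sudo wrapper directly
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true             # reap it; ignore the signal exit status
}

sleep 30 &          # disposable child to demonstrate the helper on
victim=$!
killprocess "$victim"
```

The `kill -0` probe sends no signal; it only checks that the pid exists and is signalable, which is the same check the trace's `kill -0 2150005` performs.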
00:04:22.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:22.763 08:48:38 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:22.763 08:48:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version 00:04:22.763 08:48:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:23.021 08:48:38 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:23.021 08:48:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:04:23.021 08:48:38 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:23.021 08:48:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.022 --rc genhtml_branch_coverage=1 00:04:23.022 --rc genhtml_function_coverage=1 00:04:23.022 --rc genhtml_legend=1 00:04:23.022 --rc geninfo_all_blocks=1 00:04:23.022 --rc geninfo_unexecuted_blocks=1 00:04:23.022 00:04:23.022 ' 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.022 --rc genhtml_branch_coverage=1 00:04:23.022 --rc genhtml_function_coverage=1 00:04:23.022 --rc genhtml_legend=1 00:04:23.022 --rc geninfo_all_blocks=1 00:04:23.022 --rc geninfo_unexecuted_blocks=1 
00:04:23.022 00:04:23.022 ' 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.022 --rc genhtml_branch_coverage=1 00:04:23.022 --rc genhtml_function_coverage=1 00:04:23.022 --rc genhtml_legend=1 00:04:23.022 --rc geninfo_all_blocks=1 00:04:23.022 --rc geninfo_unexecuted_blocks=1 00:04:23.022 00:04:23.022 ' 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:23.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:23.022 --rc genhtml_branch_coverage=1 00:04:23.022 --rc genhtml_function_coverage=1 00:04:23.022 --rc genhtml_legend=1 00:04:23.022 --rc geninfo_all_blocks=1 00:04:23.022 --rc geninfo_unexecuted_blocks=1 00:04:23.022 00:04:23.022 ' 00:04:23.022 08:48:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:04:23.022 08:48:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:04:23.022 08:48:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:04:23.022 08:48:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:23.022 08:48:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 ************************************ 00:04:23.022 START TEST default_locks 00:04:23.022 ************************************ 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # 
spdk_tgt_pid=2153009 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2153009 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2153009 ']' 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:23.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:23.022 08:48:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:23.022 [2024-11-20 08:48:38.917989] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
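The lcov check walked through above (`lt 1.15 2` via cmp_versions in scripts/common.sh) splits both version strings on dots and compares field by field, padding the shorter one with zeros. A condensed sketch of that logic, assuming a simplified less-than-only helper (the hypothetical name `version_lt` is illustrative; the real cmp_versions also handles `>` and `==` through the same loop):

```shell
version_lt() {
    local -a ver1 ver2
    IFS=. read -ra ver1 <<< "$1"
    IFS=. read -ra ver2 <<< "$2"
    local n=${#ver1[@]}
    (( ${#ver2[@]} > n )) && n=${#ver2[@]}
    for (( v = 0; v < n; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # missing fields count as 0
        (( a < b )) && return 0
        (( a > b )) && return 1
    done
    return 1    # equal versions are not less-than
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Numeric field comparison is what makes `2.9 < 2.10` come out true, which a plain string comparison would get wrong.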
00:04:23.022 [2024-11-20 08:48:38.918050] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153009 ] 00:04:23.022 [2024-11-20 08:48:38.993336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.022 [2024-11-20 08:48:39.033389] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.281 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.281 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:04:23.281 08:48:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2153009 00:04:23.281 08:48:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2153009 00:04:23.281 08:48:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:23.848 lslocks: write error 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2153009 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 2153009 ']' 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 2153009 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153009 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2153009' 00:04:23.848 killing process with pid 2153009 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 2153009 00:04:23.848 08:48:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 2153009 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2153009 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2153009 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 2153009 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 2153009 ']' 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
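The locks_exist check in the trace (`lslocks -p <pid> | grep -q spdk_cpu_lock`) works because spdk_tgt takes an flock() on a per-core lock file. The mutual exclusion it verifies can be sketched with flock alone; this is an illustrative sketch, assuming a mktemp file in place of the real /var/tmp/spdk_cpu_lock_* path:

```shell
lockfile=$(mktemp)

exec 9>"$lockfile"                 # first instance opens the core-lock file
flock -n 9 && first=held           # and takes it, as spdk_tgt -m 0x1 does

# a second non-blocking taker on the same file must be refused while we hold it
if flock -n "$lockfile" -c true; then
    second=free
else
    second=blocked
fi

exec 9>&-                          # closing the fd drops the lock
rm -f "$lockfile"
```

Because flock locks follow the open file description, the lock persists exactly as long as fd 9 stays open, which is why killing the holding process (as the test does) is enough to release it.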
00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2153009) - No such process 00:04:24.108 ERROR: process (pid: 2153009) is no longer running 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:24.108 00:04:24.108 real 0m1.184s 00:04:24.108 user 0m1.140s 00:04:24.108 sys 0m0.526s 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.108 08:48:40 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 ************************************ 00:04:24.108 END TEST default_locks 00:04:24.108 ************************************ 00:04:24.108 08:48:40 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:04:24.108 08:48:40 event.cpu_locks -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.108 08:48:40 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.108 08:48:40 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:24.108 ************************************ 00:04:24.108 START TEST default_locks_via_rpc 00:04:24.108 ************************************ 00:04:24.108 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2153268 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2153268 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2153268 ']' 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.109 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.109 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.439 [2024-11-20 08:48:40.175524] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:04:24.439 [2024-11-20 08:48:40.175568] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153268 ] 00:04:24.440 [2024-11-20 08:48:40.248887] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:24.440 [2024-11-20 08:48:40.290068] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:04:24.754 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.755 08:48:40 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2153268 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2153268 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2153268 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 2153268 ']' 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 2153268 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.755 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153268 00:04:25.052 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.052 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.053 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2153268' 00:04:25.053 killing process with pid 2153268 00:04:25.053 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 2153268 00:04:25.053 08:48:40 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 2153268 00:04:25.312 00:04:25.312 real 0m0.989s 00:04:25.312 user 0m0.948s 00:04:25.312 sys 0m0.448s 00:04:25.312 08:48:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:25.312 08:48:41 
event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:25.312 ************************************ 00:04:25.312 END TEST default_locks_via_rpc 00:04:25.312 ************************************ 00:04:25.312 08:48:41 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:04:25.312 08:48:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:25.312 08:48:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:25.312 08:48:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:25.312 ************************************ 00:04:25.312 START TEST non_locking_app_on_locked_coremask 00:04:25.312 ************************************ 00:04:25.312 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:04:25.312 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2153528 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2153528 /var/tmp/spdk.sock 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2153528 ']' 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:25.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.313 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.313 [2024-11-20 08:48:41.232534] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:25.313 [2024-11-20 08:48:41.232577] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153528 ] 00:04:25.313 [2024-11-20 08:48:41.289625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.313 [2024-11-20 08:48:41.332814] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2153531 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2153531 /var/tmp/spdk2.sock 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2153531 ']' 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk2.sock 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:25.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:25.572 08:48:41 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:25.572 [2024-11-20 08:48:41.594939] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:25.573 [2024-11-20 08:48:41.594990] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2153531 ] 00:04:25.831 [2024-11-20 08:48:41.681937] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
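The "CPU core locks deactivated." notice above is the second spdk_tgt instance honoring --disable-cpumask-locks: it skips the per-core flock entirely, so it can start on the same core mask as the first, still-locked instance. A toy model of that branch, assuming a single lock file and a hypothetical `start_app` helper that stands in for spdk_app_start:

```shell
start_app() {
    local lockfile=$1 disable_locks=$2
    if [ "$disable_locks" = yes ]; then
        echo "CPU core locks deactivated."   # the opt-out path seen in the log
        return 0
    fi
    exec 8>"$lockfile"
    flock -n 8 || { echo "core busy"; return 1; }
    echo "core lock taken"
}

lockfile=$(mktemp)
start_app "$lockfile" no  && first_rc=0  || first_rc=$?    # first instance locks the core
start_app "$lockfile" yes && second_rc=0 || second_rc=$?   # second instance opts out, succeeds
rm -f "$lockfile"
```

A third instance that did not opt out would fail at the flock, which is exactly the case the later locked_coremask tests exercise.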
00:04:25.831 [2024-11-20 08:48:41.681963] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.831 [2024-11-20 08:48:41.767327] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:26.398 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:26.399 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:26.399 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2153528 00:04:26.399 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2153528 00:04:26.399 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:26.965 lslocks: write error 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2153528 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2153528 ']' 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2153528 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153528 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 2153528' 00:04:26.965 killing process with pid 2153528 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2153528 00:04:26.965 08:48:42 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2153528 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2153531 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2153531 ']' 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2153531 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:27.532 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2153531 00:04:27.791 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:27.791 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:27.791 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2153531' 00:04:27.791 killing process with pid 2153531 00:04:27.791 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2153531 00:04:27.791 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2153531 00:04:28.050 00:04:28.050 real 0m2.707s 00:04:28.050 user 0m2.871s 00:04:28.050 sys 0m0.887s 00:04:28.050 08:48:43 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:28.050 08:48:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.050 ************************************ 00:04:28.050 END TEST non_locking_app_on_locked_coremask 00:04:28.050 ************************************ 00:04:28.050 08:48:43 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:04:28.050 08:48:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:28.050 08:48:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:28.050 08:48:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:28.050 ************************************ 00:04:28.050 START TEST locking_app_on_unlocked_coremask 00:04:28.050 ************************************ 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2154025 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2154025 /var/tmp/spdk.sock 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154025 ']' 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.050 08:48:43 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.050 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.050 08:48:43 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.050 [2024-11-20 08:48:44.009278] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:28.050 [2024-11-20 08:48:44.009321] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154025 ] 00:04:28.050 [2024-11-20 08:48:44.081350] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:28.050 [2024-11-20 08:48:44.081379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.308 [2024-11-20 08:48:44.119412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2154036 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2154036 /var/tmp/spdk2.sock 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154036 ']' 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:28.308 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:28.308 08:48:44 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:28.566 [2024-11-20 08:48:44.393376] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:04:28.566 [2024-11-20 08:48:44.393427] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154036 ] 00:04:28.566 [2024-11-20 08:48:44.483643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.567 [2024-11-20 08:48:44.564619] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.504 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.504 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:29.504 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2154036 00:04:29.504 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2154036 00:04:29.504 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:29.763 lslocks: write error 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2154025 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2154025 ']' 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2154025 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154025 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154025' 00:04:29.763 killing process with pid 2154025 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2154025 00:04:29.763 08:48:45 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2154025 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2154036 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2154036 ']' 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 2154036 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154036 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154036' 00:04:30.331 killing process with pid 2154036 00:04:30.331 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 2154036 00:04:30.331 08:48:46 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 2154036 00:04:30.590 00:04:30.590 real 0m2.659s 00:04:30.590 user 0m2.803s 00:04:30.590 sys 0m0.865s 00:04:30.590 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.590 08:48:46 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.591 ************************************ 00:04:30.591 END TEST locking_app_on_unlocked_coremask 00:04:30.591 ************************************ 00:04:30.850 08:48:46 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:04:30.850 08:48:46 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.850 08:48:46 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.850 08:48:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 ************************************ 00:04:30.850 START TEST locking_app_on_locked_coremask 00:04:30.850 ************************************ 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2154524 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2154524 /var/tmp/spdk.sock 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154524 ']' 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:30.850 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.850 08:48:46 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:30.850 [2024-11-20 08:48:46.736204] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:30.850 [2024-11-20 08:48:46.736246] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154524 ] 00:04:30.850 [2024-11-20 08:48:46.811611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:30.850 [2024-11-20 08:48:46.854299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2154533 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2154533 /var/tmp/spdk2.sock 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 2154533 /var/tmp/spdk2.sock 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2154533 /var/tmp/spdk2.sock 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154533 ']' 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:31.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:31.110 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:31.110 [2024-11-20 08:48:47.118878] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:31.110 [2024-11-20 08:48:47.118928] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154533 ] 00:04:31.370 [2024-11-20 08:48:47.204345] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2154524 has claimed it. 00:04:31.370 [2024-11-20 08:48:47.204377] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:31.937 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2154533) - No such process 00:04:31.937 ERROR: process (pid: 2154533) is no longer running 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2154524 00:04:31.937 08:48:47 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2154524 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:04:31.937 lslocks: write error 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2154524 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 2154524 ']' 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 2154524 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:31.937 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154524 00:04:32.197 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.197 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.197 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154524' 00:04:32.197 killing process with pid 2154524 00:04:32.197 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 2154524 00:04:32.197 08:48:47 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 2154524 00:04:32.457 00:04:32.457 real 0m1.589s 00:04:32.457 user 0m1.699s 00:04:32.457 sys 0m0.527s 00:04:32.457 08:48:48 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:32.457 08:48:48 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 END TEST locking_app_on_locked_coremask 00:04:32.457 ************************************ 00:04:32.457 08:48:48 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:04:32.457 08:48:48 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:32.457 08:48:48 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:32.457 08:48:48 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 ************************************ 00:04:32.457 START TEST locking_overlapped_coremask 00:04:32.457 ************************************ 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2154791 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2154791 /var/tmp/spdk.sock 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154791 ']' 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:32.457 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.457 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.457 [2024-11-20 08:48:48.395048] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:32.457 [2024-11-20 08:48:48.395092] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154791 ] 00:04:32.457 [2024-11-20 08:48:48.470052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:32.716 [2024-11-20 08:48:48.511418] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:32.716 [2024-11-20 08:48:48.511513] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:32.716 [2024-11-20 08:48:48.511514] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2154814 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2154814 /var/tmp/spdk2.sock 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 2154814 /var/tmp/spdk2.sock 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 2154814 /var/tmp/spdk2.sock 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 2154814 ']' 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:32.716 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:32.716 08:48:48 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:32.974 [2024-11-20 08:48:48.783835] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:04:32.974 [2024-11-20 08:48:48.783886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2154814 ] 00:04:32.974 [2024-11-20 08:48:48.876113] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2154791 has claimed it. 00:04:32.974 [2024-11-20 08:48:48.876152] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:04:33.541 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (2154814) - No such process 00:04:33.542 ERROR: process (pid: 2154814) is no longer running 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2154791 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 2154791 ']' 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 2154791 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2154791 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2154791' 00:04:33.542 killing process with pid 2154791 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 2154791 00:04:33.542 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 2154791 00:04:33.801 00:04:33.801 real 0m1.454s 00:04:33.801 user 0m4.007s 00:04:33.801 sys 0m0.402s 00:04:33.801 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.801 08:48:49 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:04:33.801 
************************************ 00:04:33.801 END TEST locking_overlapped_coremask 00:04:33.801 ************************************ 00:04:33.801 08:48:49 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:04:33.801 08:48:49 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.801 08:48:49 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.801 08:48:49 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:34.060 ************************************ 00:04:34.060 START TEST locking_overlapped_coremask_via_rpc 00:04:34.060 ************************************ 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2155055 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2155055 /var/tmp/spdk.sock 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2155055 ']' 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:34.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.060 08:48:49 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.060 [2024-11-20 08:48:49.921145] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:34.060 [2024-11-20 08:48:49.921203] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155055 ] 00:04:34.060 [2024-11-20 08:48:49.996774] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:04:34.060 [2024-11-20 08:48:49.996798] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:34.060 [2024-11-20 08:48:50.046775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.060 [2024-11-20 08:48:50.047566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.060 [2024-11-20 08:48:50.047567] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2155284 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2155284 /var/tmp/spdk2.sock 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2155284 ']' 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:34.995 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:34.995 08:48:50 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:34.995 [2024-11-20 08:48:50.803821] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:34.995 [2024-11-20 08:48:50.803873] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155284 ] 00:04:34.995 [2024-11-20 08:48:50.896022] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:04:34.995 [2024-11-20 08:48:50.896051] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:34.995 [2024-11-20 08:48:50.983880] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:34.995 [2024-11-20 08:48:50.986998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:34.995 [2024-11-20 08:48:50.986999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.931 08:48:51 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.931 [2024-11-20 08:48:51.657027] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2155055 has claimed it. 00:04:35.931 request: 00:04:35.931 { 00:04:35.931 "method": "framework_enable_cpumask_locks", 00:04:35.931 "req_id": 1 00:04:35.931 } 00:04:35.931 Got JSON-RPC error response 00:04:35.931 response: 00:04:35.931 { 00:04:35.931 "code": -32603, 00:04:35.931 "message": "Failed to claim CPU core: 2" 00:04:35.931 } 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2155055 /var/tmp/spdk.sock 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 2155055 ']' 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2155284 /var/tmp/spdk2.sock 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 2155284 ']' 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:04:35.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:35.931 08:48:51 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:04:36.190 00:04:36.190 real 0m2.207s 00:04:36.190 user 0m0.974s 00:04:36.190 sys 0m0.172s 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.190 08:48:52 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.190 ************************************ 00:04:36.190 END TEST locking_overlapped_coremask_via_rpc 00:04:36.190 ************************************ 00:04:36.190 08:48:52 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:04:36.190 08:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2155055 ]] 00:04:36.190 08:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2155055 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2155055 ']' 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2155055 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155055 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2155055' 00:04:36.190 killing process with pid 2155055 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2155055 00:04:36.190 08:48:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2155055 00:04:36.448 08:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2155284 ]] 00:04:36.448 08:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2155284 00:04:36.448 08:48:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2155284 ']' 00:04:36.448 08:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2155284 00:04:36.448 08:48:52 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:04:36.448 08:48:52 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:36.448 08:48:52 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2155284 00:04:36.707 08:48:52 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:36.707 08:48:52 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:36.707 08:48:52 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
2155284' 00:04:36.707 killing process with pid 2155284 00:04:36.707 08:48:52 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 2155284 00:04:36.707 08:48:52 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 2155284 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2155055 ]] 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2155055 00:04:36.966 08:48:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2155055 ']' 00:04:36.966 08:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2155055 00:04:36.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2155055) - No such process 00:04:36.966 08:48:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2155055 is not found' 00:04:36.966 Process with pid 2155055 is not found 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2155284 ]] 00:04:36.966 08:48:52 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2155284 00:04:36.967 08:48:52 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 2155284 ']' 00:04:36.967 08:48:52 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 2155284 00:04:36.967 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2155284) - No such process 00:04:36.967 08:48:52 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 2155284 is not found' 00:04:36.967 Process with pid 2155284 is not found 00:04:36.967 08:48:52 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:04:36.967 00:04:36.967 real 0m14.168s 00:04:36.967 user 0m25.653s 00:04:36.967 sys 0m4.767s 00:04:36.967 08:48:52 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.967 
08:48:52 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 END TEST cpu_locks 00:04:36.967 ************************************ 00:04:36.967 00:04:36.967 real 0m39.238s 00:04:36.967 user 1m16.264s 00:04:36.967 sys 0m8.277s 00:04:36.967 08:48:52 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.967 08:48:52 event -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 END TEST event 00:04:36.967 ************************************ 00:04:36.967 08:48:52 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:36.967 08:48:52 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.967 08:48:52 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.967 08:48:52 -- common/autotest_common.sh@10 -- # set +x 00:04:36.967 ************************************ 00:04:36.967 START TEST thread 00:04:36.967 ************************************ 00:04:36.967 08:48:52 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:04:37.226 * Looking for test storage... 
00:04:37.226 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:37.226 08:48:53 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:37.226 08:48:53 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:37.226 08:48:53 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:37.226 08:48:53 thread -- scripts/common.sh@336 -- # IFS=.-: 00:04:37.226 08:48:53 thread -- scripts/common.sh@336 -- # read -ra ver1 00:04:37.226 08:48:53 thread -- scripts/common.sh@337 -- # IFS=.-: 00:04:37.226 08:48:53 thread -- scripts/common.sh@337 -- # read -ra ver2 00:04:37.226 08:48:53 thread -- scripts/common.sh@338 -- # local 'op=<' 00:04:37.226 08:48:53 thread -- scripts/common.sh@340 -- # ver1_l=2 00:04:37.226 08:48:53 thread -- scripts/common.sh@341 -- # ver2_l=1 00:04:37.226 08:48:53 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:37.226 08:48:53 thread -- scripts/common.sh@344 -- # case "$op" in 00:04:37.226 08:48:53 thread -- scripts/common.sh@345 -- # : 1 00:04:37.226 08:48:53 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:37.226 08:48:53 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:37.226 08:48:53 thread -- scripts/common.sh@365 -- # decimal 1 00:04:37.226 08:48:53 thread -- scripts/common.sh@353 -- # local d=1 00:04:37.226 08:48:53 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:37.226 08:48:53 thread -- scripts/common.sh@355 -- # echo 1 00:04:37.226 08:48:53 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:04:37.226 08:48:53 thread -- scripts/common.sh@366 -- # decimal 2 00:04:37.226 08:48:53 thread -- scripts/common.sh@353 -- # local d=2 00:04:37.226 08:48:53 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:37.226 08:48:53 thread -- scripts/common.sh@355 -- # echo 2 00:04:37.226 08:48:53 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:04:37.226 08:48:53 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:37.226 08:48:53 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:37.226 08:48:53 thread -- scripts/common.sh@368 -- # return 0 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.226 --rc genhtml_branch_coverage=1 00:04:37.226 --rc genhtml_function_coverage=1 00:04:37.226 --rc genhtml_legend=1 00:04:37.226 --rc geninfo_all_blocks=1 00:04:37.226 --rc geninfo_unexecuted_blocks=1 00:04:37.226 00:04:37.226 ' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.226 --rc genhtml_branch_coverage=1 00:04:37.226 --rc genhtml_function_coverage=1 00:04:37.226 --rc genhtml_legend=1 00:04:37.226 --rc geninfo_all_blocks=1 00:04:37.226 --rc geninfo_unexecuted_blocks=1 00:04:37.226 00:04:37.226 ' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:37.226 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.226 --rc genhtml_branch_coverage=1 00:04:37.226 --rc genhtml_function_coverage=1 00:04:37.226 --rc genhtml_legend=1 00:04:37.226 --rc geninfo_all_blocks=1 00:04:37.226 --rc geninfo_unexecuted_blocks=1 00:04:37.226 00:04:37.226 ' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:37.226 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:37.226 --rc genhtml_branch_coverage=1 00:04:37.226 --rc genhtml_function_coverage=1 00:04:37.226 --rc genhtml_legend=1 00:04:37.226 --rc geninfo_all_blocks=1 00:04:37.226 --rc geninfo_unexecuted_blocks=1 00:04:37.226 00:04:37.226 ' 00:04:37.226 08:48:53 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:37.226 08:48:53 thread -- common/autotest_common.sh@10 -- # set +x 00:04:37.226 ************************************ 00:04:37.226 START TEST thread_poller_perf 00:04:37.226 ************************************ 00:04:37.226 08:48:53 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:04:37.226 [2024-11-20 08:48:53.173692] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:04:37.226 [2024-11-20 08:48:53.173761] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2155837 ] 00:04:37.226 [2024-11-20 08:48:53.252844] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.485 [2024-11-20 08:48:53.293997] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.485 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:04:38.420 [2024-11-20T07:48:54.461Z] ====================================== 00:04:38.420 [2024-11-20T07:48:54.461Z] busy:2305043892 (cyc) 00:04:38.420 [2024-11-20T07:48:54.461Z] total_run_count: 409000 00:04:38.420 [2024-11-20T07:48:54.461Z] tsc_hz: 2300000000 (cyc) 00:04:38.420 [2024-11-20T07:48:54.462Z] ====================================== 00:04:38.421 [2024-11-20T07:48:54.462Z] poller_cost: 5635 (cyc), 2450 (nsec) 00:04:38.421 00:04:38.421 real 0m1.188s 00:04:38.421 user 0m1.108s 00:04:38.421 sys 0m0.075s 00:04:38.421 08:48:54 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:38.421 08:48:54 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 END TEST thread_poller_perf 00:04:38.421 ************************************ 00:04:38.421 08:48:54 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.421 08:48:54 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:04:38.421 08:48:54 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:38.421 08:48:54 thread -- common/autotest_common.sh@10 -- # set +x 00:04:38.421 ************************************ 00:04:38.421 START TEST thread_poller_perf 00:04:38.421 
************************************ 00:04:38.421 08:48:54 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:04:38.421 [2024-11-20 08:48:54.431172] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:38.421 [2024-11-20 08:48:54.431245] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156009 ] 00:04:38.679 [2024-11-20 08:48:54.507329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:38.679 [2024-11-20 08:48:54.548323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:38.679 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:04:39.615 [2024-11-20T07:48:55.656Z] ====================================== 00:04:39.615 [2024-11-20T07:48:55.656Z] busy:2301562018 (cyc) 00:04:39.615 [2024-11-20T07:48:55.656Z] total_run_count: 5429000 00:04:39.615 [2024-11-20T07:48:55.656Z] tsc_hz: 2300000000 (cyc) 00:04:39.615 [2024-11-20T07:48:55.656Z] ====================================== 00:04:39.615 [2024-11-20T07:48:55.656Z] poller_cost: 423 (cyc), 183 (nsec) 00:04:39.615 00:04:39.615 real 0m1.176s 00:04:39.615 user 0m1.103s 00:04:39.615 sys 0m0.070s 00:04:39.615 08:48:55 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.615 08:48:55 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:04:39.615 ************************************ 00:04:39.615 END TEST thread_poller_perf 00:04:39.615 ************************************ 00:04:39.615 08:48:55 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:04:39.615 00:04:39.615 real 0m2.681s 00:04:39.615 user 0m2.371s 00:04:39.615 sys 0m0.323s 00:04:39.615 08:48:55 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.615 08:48:55 thread -- common/autotest_common.sh@10 -- # set +x 00:04:39.615 ************************************ 00:04:39.615 END TEST thread 00:04:39.615 ************************************ 00:04:39.874 08:48:55 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:04:39.874 08:48:55 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:39.874 08:48:55 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.874 08:48:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.874 08:48:55 -- common/autotest_common.sh@10 -- # set +x 00:04:39.874 ************************************ 00:04:39.874 START TEST app_cmdline 00:04:39.874 ************************************ 00:04:39.874 08:48:55 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:04:39.874 * Looking for test storage... 00:04:39.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:39.874 08:48:55 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:39.874 08:48:55 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:04:39.874 08:48:55 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:39.874 08:48:55 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@345 -- # : 1 00:04:39.874 08:48:55 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:39.875 08:48:55 app_cmdline -- scripts/common.sh@368 -- # return 0 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.875 --rc genhtml_branch_coverage=1 
00:04:39.875 --rc genhtml_function_coverage=1 00:04:39.875 --rc genhtml_legend=1 00:04:39.875 --rc geninfo_all_blocks=1 00:04:39.875 --rc geninfo_unexecuted_blocks=1 00:04:39.875 00:04:39.875 ' 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.875 --rc genhtml_branch_coverage=1 00:04:39.875 --rc genhtml_function_coverage=1 00:04:39.875 --rc genhtml_legend=1 00:04:39.875 --rc geninfo_all_blocks=1 00:04:39.875 --rc geninfo_unexecuted_blocks=1 00:04:39.875 00:04:39.875 ' 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.875 --rc genhtml_branch_coverage=1 00:04:39.875 --rc genhtml_function_coverage=1 00:04:39.875 --rc genhtml_legend=1 00:04:39.875 --rc geninfo_all_blocks=1 00:04:39.875 --rc geninfo_unexecuted_blocks=1 00:04:39.875 00:04:39.875 ' 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:39.875 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:39.875 --rc genhtml_branch_coverage=1 00:04:39.875 --rc genhtml_function_coverage=1 00:04:39.875 --rc genhtml_legend=1 00:04:39.875 --rc geninfo_all_blocks=1 00:04:39.875 --rc geninfo_unexecuted_blocks=1 00:04:39.875 00:04:39.875 ' 00:04:39.875 08:48:55 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:04:39.875 08:48:55 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2156348 00:04:39.875 08:48:55 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:04:39.875 08:48:55 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2156348 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 2156348 ']' 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:39.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:39.875 08:48:55 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.134 [2024-11-20 08:48:55.927294] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:40.134 [2024-11-20 08:48:55.927345] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2156348 ] 00:04:40.134 [2024-11-20 08:48:56.002646] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.134 [2024-11-20 08:48:56.045123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.392 08:48:56 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.392 08:48:56 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:04:40.392 08:48:56 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:04:40.652 { 00:04:40.652 "version": "SPDK v25.01-pre git sha1 1c7c7c64f", 00:04:40.652 "fields": { 00:04:40.652 "major": 25, 00:04:40.652 "minor": 1, 00:04:40.652 "patch": 0, 00:04:40.652 "suffix": "-pre", 00:04:40.652 "commit": "1c7c7c64f" 00:04:40.652 } 00:04:40.652 } 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@26 -- # sort 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:04:40.652 request: 00:04:40.652 { 00:04:40.652 "method": "env_dpdk_get_mem_stats", 00:04:40.652 "req_id": 1 00:04:40.652 } 00:04:40.652 Got JSON-RPC error response 00:04:40.652 response: 00:04:40.652 { 00:04:40.652 "code": -32601, 00:04:40.652 "message": "Method not found" 00:04:40.652 } 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:40.652 08:48:56 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2156348 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 2156348 ']' 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 2156348 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:40.652 08:48:56 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2156348 00:04:40.911 08:48:56 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:40.911 08:48:56 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:40.911 08:48:56 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2156348' 00:04:40.911 killing process with pid 2156348 00:04:40.911 
08:48:56 app_cmdline -- common/autotest_common.sh@973 -- # kill 2156348 00:04:40.911 08:48:56 app_cmdline -- common/autotest_common.sh@978 -- # wait 2156348 00:04:41.172 00:04:41.172 real 0m1.336s 00:04:41.172 user 0m1.542s 00:04:41.172 sys 0m0.455s 00:04:41.172 08:48:57 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.172 08:48:57 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:04:41.172 ************************************ 00:04:41.172 END TEST app_cmdline 00:04:41.172 ************************************ 00:04:41.172 08:48:57 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.172 08:48:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:41.172 08:48:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.172 08:48:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.172 ************************************ 00:04:41.172 START TEST version 00:04:41.172 ************************************ 00:04:41.172 08:48:57 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:04:41.172 * Looking for test storage... 
00:04:41.172 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:04:41.172 08:48:57 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.172 08:48:57 version -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.172 08:48:57 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.432 08:48:57 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.432 08:48:57 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.432 08:48:57 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.432 08:48:57 version -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.432 08:48:57 version -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.432 08:48:57 version -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.432 08:48:57 version -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.432 08:48:57 version -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.432 08:48:57 version -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.432 08:48:57 version -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.432 08:48:57 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.432 08:48:57 version -- scripts/common.sh@344 -- # case "$op" in 00:04:41.432 08:48:57 version -- scripts/common.sh@345 -- # : 1 00:04:41.432 08:48:57 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.432 08:48:57 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.432 08:48:57 version -- scripts/common.sh@365 -- # decimal 1 00:04:41.432 08:48:57 version -- scripts/common.sh@353 -- # local d=1 00:04:41.432 08:48:57 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.432 08:48:57 version -- scripts/common.sh@355 -- # echo 1 00:04:41.432 08:48:57 version -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.432 08:48:57 version -- scripts/common.sh@366 -- # decimal 2 00:04:41.432 08:48:57 version -- scripts/common.sh@353 -- # local d=2 00:04:41.432 08:48:57 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.432 08:48:57 version -- scripts/common.sh@355 -- # echo 2 00:04:41.432 08:48:57 version -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.432 08:48:57 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.432 08:48:57 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.432 08:48:57 version -- scripts/common.sh@368 -- # return 0 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.432 --rc genhtml_branch_coverage=1 00:04:41.432 --rc genhtml_function_coverage=1 00:04:41.432 --rc genhtml_legend=1 00:04:41.432 --rc geninfo_all_blocks=1 00:04:41.432 --rc geninfo_unexecuted_blocks=1 00:04:41.432 00:04:41.432 ' 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.432 --rc genhtml_branch_coverage=1 00:04:41.432 --rc genhtml_function_coverage=1 00:04:41.432 --rc genhtml_legend=1 00:04:41.432 --rc geninfo_all_blocks=1 00:04:41.432 --rc geninfo_unexecuted_blocks=1 00:04:41.432 00:04:41.432 ' 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.432 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.432 --rc genhtml_branch_coverage=1 00:04:41.432 --rc genhtml_function_coverage=1 00:04:41.432 --rc genhtml_legend=1 00:04:41.432 --rc geninfo_all_blocks=1 00:04:41.432 --rc geninfo_unexecuted_blocks=1 00:04:41.432 00:04:41.432 ' 00:04:41.432 08:48:57 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.432 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.432 --rc genhtml_branch_coverage=1 00:04:41.432 --rc genhtml_function_coverage=1 00:04:41.432 --rc genhtml_legend=1 00:04:41.432 --rc geninfo_all_blocks=1 00:04:41.432 --rc geninfo_unexecuted_blocks=1 00:04:41.432 00:04:41.432 ' 00:04:41.432 08:48:57 version -- app/version.sh@17 -- # get_header_version major 00:04:41.432 08:48:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # cut -f2 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.432 08:48:57 version -- app/version.sh@17 -- # major=25 00:04:41.432 08:48:57 version -- app/version.sh@18 -- # get_header_version minor 00:04:41.432 08:48:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # cut -f2 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.432 08:48:57 version -- app/version.sh@18 -- # minor=1 00:04:41.432 08:48:57 version -- app/version.sh@19 -- # get_header_version patch 00:04:41.432 08:48:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # cut -f2 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.432 
08:48:57 version -- app/version.sh@19 -- # patch=0 00:04:41.432 08:48:57 version -- app/version.sh@20 -- # get_header_version suffix 00:04:41.432 08:48:57 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # cut -f2 00:04:41.432 08:48:57 version -- app/version.sh@14 -- # tr -d '"' 00:04:41.432 08:48:57 version -- app/version.sh@20 -- # suffix=-pre 00:04:41.432 08:48:57 version -- app/version.sh@22 -- # version=25.1 00:04:41.432 08:48:57 version -- app/version.sh@25 -- # (( patch != 0 )) 00:04:41.432 08:48:57 version -- app/version.sh@28 -- # version=25.1rc0 00:04:41.432 08:48:57 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:04:41.433 08:48:57 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:04:41.433 08:48:57 version -- app/version.sh@30 -- # py_version=25.1rc0 00:04:41.433 08:48:57 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:04:41.433 00:04:41.433 real 0m0.248s 00:04:41.433 user 0m0.153s 00:04:41.433 sys 0m0.138s 00:04:41.433 08:48:57 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:41.433 08:48:57 version -- common/autotest_common.sh@10 -- # set +x 00:04:41.433 ************************************ 00:04:41.433 END TEST version 00:04:41.433 ************************************ 00:04:41.433 08:48:57 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:04:41.433 08:48:57 -- spdk/autotest.sh@194 -- # uname -s 00:04:41.433 08:48:57 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:04:41.433 08:48:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.433 08:48:57 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:04:41.433 08:48:57 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@260 -- # timing_exit lib 00:04:41.433 08:48:57 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:41.433 08:48:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.433 08:48:57 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:04:41.433 08:48:57 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:04:41.433 08:48:57 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.433 08:48:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.433 08:48:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.433 08:48:57 -- common/autotest_common.sh@10 -- # set +x 00:04:41.433 ************************************ 00:04:41.433 START TEST nvmf_tcp 00:04:41.433 ************************************ 00:04:41.433 08:48:57 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:04:41.692 * Looking for test storage... 
00:04:41.692 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.692 08:48:57 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.692 08:48:57 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.692 08:48:57 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.692 08:48:57 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.692 08:48:57 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.692 08:48:57 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.692 08:48:57 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.693 08:48:57 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.693 --rc genhtml_branch_coverage=1 00:04:41.693 --rc genhtml_function_coverage=1 00:04:41.693 --rc genhtml_legend=1 00:04:41.693 --rc geninfo_all_blocks=1 00:04:41.693 --rc geninfo_unexecuted_blocks=1 00:04:41.693 00:04:41.693 ' 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.693 --rc genhtml_branch_coverage=1 00:04:41.693 --rc genhtml_function_coverage=1 00:04:41.693 --rc genhtml_legend=1 00:04:41.693 --rc geninfo_all_blocks=1 00:04:41.693 --rc geninfo_unexecuted_blocks=1 00:04:41.693 00:04:41.693 ' 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:41.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.693 --rc genhtml_branch_coverage=1 00:04:41.693 --rc genhtml_function_coverage=1 00:04:41.693 --rc genhtml_legend=1 00:04:41.693 --rc geninfo_all_blocks=1 00:04:41.693 --rc geninfo_unexecuted_blocks=1 00:04:41.693 00:04:41.693 ' 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.693 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.693 --rc genhtml_branch_coverage=1 00:04:41.693 --rc genhtml_function_coverage=1 00:04:41.693 --rc genhtml_legend=1 00:04:41.693 --rc geninfo_all_blocks=1 00:04:41.693 --rc geninfo_unexecuted_blocks=1 00:04:41.693 00:04:41.693 ' 00:04:41.693 08:48:57 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.693 08:48:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:41.693 ************************************ 00:04:41.693 START TEST nvmf_target_core 00:04:41.693 ************************************ 00:04:41.693 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:04:41.953 * Looking for test storage... 
00:04:41.953 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:41.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.953 --rc genhtml_branch_coverage=1 00:04:41.953 --rc genhtml_function_coverage=1 00:04:41.953 --rc genhtml_legend=1 00:04:41.953 --rc geninfo_all_blocks=1 00:04:41.953 --rc geninfo_unexecuted_blocks=1 00:04:41.953 00:04:41.953 ' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:41.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.953 --rc genhtml_branch_coverage=1 
00:04:41.953 --rc genhtml_function_coverage=1 00:04:41.953 --rc genhtml_legend=1 00:04:41.953 --rc geninfo_all_blocks=1 00:04:41.953 --rc geninfo_unexecuted_blocks=1 00:04:41.953 00:04:41.953 ' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:41.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.953 --rc genhtml_branch_coverage=1 00:04:41.953 --rc genhtml_function_coverage=1 00:04:41.953 --rc genhtml_legend=1 00:04:41.953 --rc geninfo_all_blocks=1 00:04:41.953 --rc geninfo_unexecuted_blocks=1 00:04:41.953 00:04:41.953 ' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:41.953 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:41.953 --rc genhtml_branch_coverage=1 00:04:41.953 --rc genhtml_function_coverage=1 00:04:41.953 --rc genhtml_legend=1 00:04:41.953 --rc geninfo_all_blocks=1 00:04:41.953 --rc geninfo_unexecuted_blocks=1 00:04:41.953 00:04:41.953 ' 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.953 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:41.954 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:41.954 
************************************ 00:04:41.954 START TEST nvmf_abort 00:04:41.954 ************************************ 00:04:41.954 08:48:57 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:04:42.214 * Looking for test storage... 00:04:42.214 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:42.214 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:42.215 
08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.215 --rc genhtml_branch_coverage=1 00:04:42.215 --rc genhtml_function_coverage=1 00:04:42.215 --rc genhtml_legend=1 00:04:42.215 --rc geninfo_all_blocks=1 00:04:42.215 --rc geninfo_unexecuted_blocks=1 00:04:42.215 00:04:42.215 ' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.215 --rc genhtml_branch_coverage=1 00:04:42.215 --rc genhtml_function_coverage=1 00:04:42.215 --rc genhtml_legend=1 00:04:42.215 --rc geninfo_all_blocks=1 00:04:42.215 --rc geninfo_unexecuted_blocks=1 00:04:42.215 00:04:42.215 ' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.215 --rc genhtml_branch_coverage=1 00:04:42.215 --rc genhtml_function_coverage=1 00:04:42.215 --rc genhtml_legend=1 00:04:42.215 --rc geninfo_all_blocks=1 00:04:42.215 --rc geninfo_unexecuted_blocks=1 00:04:42.215 00:04:42.215 ' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:42.215 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:42.215 --rc genhtml_branch_coverage=1 00:04:42.215 --rc genhtml_function_coverage=1 00:04:42.215 --rc genhtml_legend=1 00:04:42.215 --rc geninfo_all_blocks=1 00:04:42.215 --rc geninfo_unexecuted_blocks=1 00:04:42.215 00:04:42.215 ' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:42.215 08:48:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:04:42.215 08:48:58 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:42.215 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:04:42.215 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:04:42.216 08:48:58 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:04:48.787 08:49:03 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:04:48.787 Found 0000:86:00.0 (0x8086 - 0x159b) 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:04:48.787 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:04:48.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:04:48.788 Found net devices under 0000:86:00.0: cvl_0_0 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:04:48.788 Found net devices under 0000:86:00.1: cvl_0_1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:04:48.788 
08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:04:48.788 08:49:03 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:04:48.788 10.0.0.1 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:04:48.788 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_1' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:04:48.789 10.0.0.2 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:04:48.789 
08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:48.789 08:49:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:04:48.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:04:48.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.345 ms 00:04:48.789 00:04:48.789 --- 10.0.0.1 ping statistics --- 00:04:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:48.789 rtt min/avg/max/mdev = 0.345/0.345/0.345/0.000 ms 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:04:48.789 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:04:48.789 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.157 ms 00:04:48.789 00:04:48.789 --- 10.0.0.2 ping statistics --- 00:04:48.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:04:48.789 rtt min/avg/max/mdev = 0.157/0.157/0.157/0.000 ms 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:04:48.789 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:04:48.790 08:49:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=2160021 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@329 -- # waitforlisten 2160021 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2160021 ']' 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.790 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.790 [2024-11-20 08:49:04.323109] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:04:48.790 [2024-11-20 08:49:04.323152] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:04:48.790 [2024-11-20 08:49:04.403193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:04:48.790 [2024-11-20 08:49:04.444515] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:04:48.790 [2024-11-20 08:49:04.444552] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:04:48.790 [2024-11-20 08:49:04.444560] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:48.790 [2024-11-20 08:49:04.444565] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:48.790 [2024-11-20 08:49:04.444570] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:04:48.790 [2024-11-20 08:49:04.445968] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.790 [2024-11-20 08:49:04.446043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.790 [2024-11-20 08:49:04.446044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:04:48.790 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 [2024-11-20 08:49:04.594032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 Malloc0 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 Delay0 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 [2024-11-20 08:49:04.667347] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:48.791 08:49:04 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:04:48.791 [2024-11-20 08:49:04.763343] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:04:51.322 Initializing NVMe Controllers 00:04:51.322 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:04:51.322 controller IO queue size 128 less than required 00:04:51.322 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:04:51.322 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:04:51.322 Initialization complete. Launching workers. 
00:04:51.322 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 127, failed: 36484 00:04:51.322 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36549, failed to submit 62 00:04:51.322 success 36488, unsuccessful 61, failed 0 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:04:51.322 rmmod nvme_tcp 00:04:51.322 rmmod nvme_fabrics 00:04:51.322 rmmod nvme_keyring 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:04:51.322 08:49:06 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 2160021 ']' 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 2160021 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2160021 ']' 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2160021 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2160021 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:04:51.322 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:04:51.323 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2160021' 00:04:51.323 killing process with pid 2160021 00:04:51.323 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2160021 00:04:51.323 08:49:06 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2160021 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:04:51.323 08:49:07 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:04:53.228 00:04:53.228 real 0m11.301s 00:04:53.228 user 0m11.538s 00:04:53.228 sys 0m5.496s 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:04:53.228 ************************************ 00:04:53.228 END TEST nvmf_abort 00:04:53.228 ************************************ 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.228 08:49:09 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:04:53.488 ************************************ 00:04:53.488 START TEST 
nvmf_ns_hotplug_stress 00:04:53.488 ************************************ 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:04:53.488 * Looking for test storage... 00:04:53.488 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.488 08:49:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.488 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.489 08:49:09 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.489 --rc genhtml_branch_coverage=1 00:04:53.489 --rc genhtml_function_coverage=1 00:04:53.489 --rc genhtml_legend=1 00:04:53.489 --rc geninfo_all_blocks=1 00:04:53.489 --rc geninfo_unexecuted_blocks=1 00:04:53.489 00:04:53.489 ' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.489 --rc genhtml_branch_coverage=1 00:04:53.489 --rc genhtml_function_coverage=1 00:04:53.489 --rc genhtml_legend=1 00:04:53.489 --rc geninfo_all_blocks=1 00:04:53.489 --rc geninfo_unexecuted_blocks=1 00:04:53.489 00:04:53.489 ' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.489 --rc genhtml_branch_coverage=1 00:04:53.489 --rc genhtml_function_coverage=1 00:04:53.489 --rc genhtml_legend=1 00:04:53.489 --rc geninfo_all_blocks=1 00:04:53.489 --rc geninfo_unexecuted_blocks=1 00:04:53.489 00:04:53.489 ' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.489 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.489 --rc genhtml_branch_coverage=1 00:04:53.489 --rc genhtml_function_coverage=1 00:04:53.489 
--rc genhtml_legend=1 00:04:53.489 --rc geninfo_all_blocks=1 00:04:53.489 --rc geninfo_unexecuted_blocks=1 00:04:53.489 00:04:53.489 ' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:53.489 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:04:53.489 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:04:53.490 08:49:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:05:00.062 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:05:00.063 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:00.063 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:00.063 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:00.063 Found net devices under 0000:86:00.0: cvl_0_0 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:00.063 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:00.063 Found net devices under 0000:86:00.1: cvl_0_1 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # 
create_target_ns 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:05:00.063 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:05:00.064 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:05:00.064 
10.0.0.1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:05:00.064 10.0.0.2 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:05:00.064 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp 
--dport 4420 -j ACCEPT' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:00.064 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:00.064 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:05:00.065 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:00.065 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:05:00.065 00:05:00.065 --- 10.0.0.1 ping statistics --- 00:05:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:00.065 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:00.065 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:00.065 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:05:00.065 00:05:00.065 --- 10.0.0.2 ping statistics --- 00:05:00.065 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:00.065 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:00.065 08:49:15 
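The trace above brings both ends of the link up, opens the NVMe/TCP port in iptables, reads each interface's IP from its `ifalias`, and pings in both directions across the `nvmf_ns_spdk` namespace. A hedged dry-run sketch of that check — device names, namespace, and addresses are taken from the log, and `run()` only echoes, so this executes without root or SPDK:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup.sh connectivity check traced above.
# run() echoes instead of executing, so no privileges are needed.
run() { echo "+ $*"; }

INITIATOR_DEV=cvl_0_0     # initiator-side interface (from the trace)
TARGET_DEV=cvl_0_1        # target-side interface, inside the netns
TARGET_NS=nvmf_ns_spdk    # network namespace holding the target

run ip link set "$INITIATOR_DEV" up
run ip netns exec "$TARGET_NS" ip link set "$TARGET_DEV" up
run iptables -I INPUT 1 -i "$INITIATOR_DEV" -p tcp --dport 4420 -j ACCEPT
# The IPs come from the interfaces' ifalias files, e.g.:
#   cat /sys/class/net/cvl_0_0/ifalias                          -> 10.0.0.1
#   ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias -> 10.0.0.2
run ip netns exec "$TARGET_NS" ping -c 1 10.0.0.1   # target ns -> initiator
run ping -c 1 10.0.0.2                              # initiator -> target ns
```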
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:00.065 08:49:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:00.065 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@10 -- # set +x 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=2164146 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 2164146 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2164146 ']' 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:00.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.066 [2024-11-20 08:49:15.693682] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:05:00.066 [2024-11-20 08:49:15.693735] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:00.066 [2024-11-20 08:49:15.775123] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:00.066 [2024-11-20 08:49:15.817080] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:00.066 [2024-11-20 08:49:15.817118] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:00.066 [2024-11-20 08:49:15.817125] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:00.066 [2024-11-20 08:49:15.817132] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:00.066 [2024-11-20 08:49:15.817137] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:00.066 [2024-11-20 08:49:15.818570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:00.066 [2024-11-20 08:49:15.818675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.066 [2024-11-20 08:49:15.818675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:00.066 08:49:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:00.323 [2024-11-20 08:49:16.119720] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:00.323 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:00.323 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:00.580 [2024-11-20 08:49:16.513137] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:00.580 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:00.838 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:01.095 Malloc0 00:05:01.095 08:49:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:01.352 Delay0 00:05:01.352 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:01.352 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:01.609 NULL1 00:05:01.609 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:01.866 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:01.866 08:49:17 
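The RPC sequence embedded in the trace above (transport, subsystem, listeners, then the Malloc0 → Delay0 and NULL1 bdevs) can be pulled out into a standalone script. A dry-run sketch — `rpc()` just echoes instead of calling the real `scripts/rpc.py`, so it runs without a live SPDK target; all arguments are copied from the log:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-side setup performed by ns_hotplug_stress.sh.
rpc() { echo "+ rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0        # 32 MiB bdev, 512 B blocks
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000 # add artificial latency
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc bdev_null_create NULL1 1000 512             # 1000 blocks x 512 B
rpc nvmf_subsystem_add_ns "$NQN" NULL1
# I/O load then comes from (as traced above):
#   spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
#                  -t 30 -q 128 -w randread -o 512 -Q 1000
```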
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2164416 00:05:01.866 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:01.866 08:49:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:03.238 Read completed with error (sct=0, sc=11) 00:05:03.238 08:49:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:03.238 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:03.238 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:03.494 true 00:05:03.494 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:03.494 08:49:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.424 08:49:20 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:04.424 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:04.424 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:04.681 true 00:05:04.681 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:04.681 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:04.938 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:05.196 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:05.196 08:49:20 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:05.196 true 00:05:05.196 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:05.196 08:49:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.564 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:06.564 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.565 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:06.565 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:06.565 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:06.822 true 00:05:06.822 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:06.822 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:06.822 08:49:22 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.079 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:07.079 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:07.336 true 00:05:07.336 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:07.336 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:07.593 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:07.850 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:07.850 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:07.850 true 00:05:07.850 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:07.850 08:49:23 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:08.107 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:08.364 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:08.364 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:08.620 true 00:05:08.620 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:08.620 08:49:24 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:09.553 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:05:09.553 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:09.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.553 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:09.809 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:09.809 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:10.066 true 00:05:10.066 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:10.066 08:49:25 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:10.066 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:10.323 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:10.323 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:10.581 true 00:05:10.581 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:10.582 08:49:26 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 08:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:11.951 08:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:11.951 08:49:27 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:12.235 true 00:05:12.235 08:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:12.235 08:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.210 08:49:28 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.210 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:13.210 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:13.466 true 00:05:13.466 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:13.466 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:13.723 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:13.723 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:13.723 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:13.981 true 00:05:13.981 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:13.981 08:49:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.352 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.352 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.353 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:15.353 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1013 00:05:15.353 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:15.610 true 00:05:15.610 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:15.610 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:15.610 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:15.867 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:05:15.867 08:49:31 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:05:16.124 true 00:05:16.124 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:16.124 08:49:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.495 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.495 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:17.495 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:05:17.495 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:05:17.752 true 00:05:17.752 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:17.752 08:49:33 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:18.684 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:18.684 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:18.684 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:05:18.685 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:05:18.942 true 00:05:18.942 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:18.942 08:49:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:19.198 08:49:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:19.456 08:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:05:19.456 08:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:05:19.456 true 00:05:19.456 08:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:19.456 08:49:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 08:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:20.828 08:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:05:20.828 08:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:05:21.085 true 
00:05:21.085 08:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:21.085 08:49:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.016 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:22.016 08:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.016 08:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:05:22.016 08:49:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:05:22.274 true 00:05:22.274 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:22.274 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:22.530 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:22.530 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:05:22.530 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:05:22.787 true 00:05:22.787 08:49:38 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:22.787 08:49:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 08:49:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:24.159 08:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:05:24.159 08:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:05:24.416 true 00:05:24.416 08:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:24.416 08:49:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.348 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
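The repeating `@44`–`@50` records above are one pass of the hotplug loop in `test/nvmf/target/ns_hotplug_stress.sh`. A minimal sketch of that loop follows; the `rpc` function is a stand-in echo stub (the real script calls `scripts/rpc.py` against a live target, and watches a background perf process rather than its own PID), and the starting `null_size` is just the value the log happens to be at:

```shell
#!/usr/bin/env bash
# Hedged sketch of the ns_hotplug_stress.sh loop (script lines @44-@50 in the
# log). "rpc" is a stub so the sketch runs without an SPDK target; perf_pid is
# a placeholder for the background perf process the real script monitors.
rpc() { echo "rpc.py $*"; }

perf_pid=$$        # placeholder: the real script tracks the perf app's PID
null_size=1007     # counter picked up mid-run in the log (1008, 1009, ...)

for _ in 1 2 3; do
  kill -0 "$perf_pid" || break                                  # @44: stop once perf exits
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1     # @45: hot-remove NSID 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0   # @46: re-attach the bdev
  ((++null_size))                                               # @49: next target size
  rpc bdev_null_resize NULL1 "$null_size"                       # @50: resize under I/O
done
echo "final null_size=$null_size"
```

The `kill -0` probe is why the loop ends with a "No such process" line later in the log: once the perf process exits, the signal-0 check fails and the script falls through to `wait`.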
00:05:25.348 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:05:25.348 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:05:25.605 true 00:05:25.605 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:25.605 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:25.862 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:26.120 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:05:26.120 08:49:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:05:26.120 true 00:05:26.120 08:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:26.120 08:49:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:27.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.491 08:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:27.491 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:05:27.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.491 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:27.491 08:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:05:27.491 08:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:05:27.748 true 00:05:27.748 08:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:27.748 08:49:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:28.679 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:28.679 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:05:28.679 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:05:28.936 true 00:05:28.936 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:28.936 08:49:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:29.193 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:29.450 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:05:29.450 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:05:29.450 true 00:05:29.450 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:29.450 08:49:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:30.822 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:05:30.822 08:49:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_null_resize NULL1 1027 00:05:31.079 true 00:05:31.079 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:31.079 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.011 08:49:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:32.011 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:05:32.011 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:05:32.011 Initializing NVMe Controllers 00:05:32.011 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:32.011 Controller IO queue size 128, less than required. 00:05:32.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:32.011 Controller IO queue size 128, less than required. 00:05:32.011 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:32.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:05:32.011 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:05:32.011 Initialization complete. Launching workers. 
00:05:32.011 ======================================================== 00:05:32.011 Latency(us) 00:05:32.011 Device Information : IOPS MiB/s Average min max 00:05:32.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1903.35 0.93 44193.29 2675.78 1096523.69 00:05:32.011 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 16628.33 8.12 7697.67 1608.17 457016.44 00:05:32.011 ======================================================== 00:05:32.011 Total : 18531.67 9.05 11446.06 1608.17 1096523.69 00:05:32.011 00:05:32.269 true 00:05:32.269 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2164416 00:05:32.269 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2164416) - No such process 00:05:32.269 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2164416 00:05:32.269 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:32.527 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:05:32.784 null0 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:32.784 08:49:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:05:33.041 null1 00:05:33.041 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:33.041 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:33.041 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:05:33.298 null2 00:05:33.298 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:33.298 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:33.298 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:05:33.554 null3 00:05:33.554 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:33.554 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:33.554 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:05:33.554 null4 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:05:33.810 null5 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:33.810 08:49:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:05:34.066 null6 00:05:34.066 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.066 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.066 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:05:34.323 null7 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.323 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2170015 2170016 2170018 2170020 2170022 2170024 2170026 2170028 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.324 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.580 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 
08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:34.837 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:34.838 08:49:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.095 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.096 08:49:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.096 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.352 08:49:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.352 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.609 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:35.867 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.125 08:49:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.125 08:49:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.125 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.383 08:49:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.383 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:36.640 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:36.898 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 4 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.156 08:49:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i 
)) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.156 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.414 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 
nqn.2016-06.io.spdk:cnode1 null5 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:37.672 08:49:53 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:37.672 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:37.930 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:37.931 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.188 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:05:38.189 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 
10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:05:38.189 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 5 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:05:38.447 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:05:38.447 rmmod nvme_tcp 00:05:38.447 rmmod nvme_fabrics 00:05:38.447 rmmod nvme_keyring 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 2164146 ']' 00:05:38.706 08:49:54 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2164146 ']' 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2164146' 00:05:38.706 killing process with pid 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2164146 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:38.706 08:49:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- 
# (( 4 == 3 )) 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:05:41.246 00:05:41.246 real 0m47.520s 00:05:41.246 user 3m12.861s 00:05:41.246 sys 0m15.568s 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 ************************************ 00:05:41.246 END TEST nvmf_ns_hotplug_stress 00:05:41.246 ************************************ 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:41.246 ************************************ 00:05:41.246 START TEST nvmf_delete_subsystem 00:05:41.246 ************************************ 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:05:41.246 * Looking for test storage... 00:05:41.246 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:05:41.246 08:49:56 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:05:41.246 
08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:05:41.246 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.247 --rc genhtml_branch_coverage=1 00:05:41.247 --rc genhtml_function_coverage=1 00:05:41.247 --rc genhtml_legend=1 
00:05:41.247 --rc geninfo_all_blocks=1 00:05:41.247 --rc geninfo_unexecuted_blocks=1 00:05:41.247 00:05:41.247 ' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.247 --rc genhtml_branch_coverage=1 00:05:41.247 --rc genhtml_function_coverage=1 00:05:41.247 --rc genhtml_legend=1 00:05:41.247 --rc geninfo_all_blocks=1 00:05:41.247 --rc geninfo_unexecuted_blocks=1 00:05:41.247 00:05:41.247 ' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.247 --rc genhtml_branch_coverage=1 00:05:41.247 --rc genhtml_function_coverage=1 00:05:41.247 --rc genhtml_legend=1 00:05:41.247 --rc geninfo_all_blocks=1 00:05:41.247 --rc geninfo_unexecuted_blocks=1 00:05:41.247 00:05:41.247 ' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:41.247 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:41.247 --rc genhtml_branch_coverage=1 00:05:41.247 --rc genhtml_function_coverage=1 00:05:41.247 --rc genhtml_legend=1 00:05:41.247 --rc geninfo_all_blocks=1 00:05:41.247 --rc geninfo_unexecuted_blocks=1 00:05:41.247 00:05:41.247 ' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
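The trace above walks through `scripts/common.sh`'s component-wise version comparison (`lt 1.15 2` → `cmp_versions 1.15 '<' 2`): each version string is split on `.`, `-`, or `:` via `IFS=.-:` and `read -ra`, then compared index by index. A minimal re-creation of that logic (the function name `lt_version` is mine, not from the script):

```shell
# Sketch of the cmp_versions '<' path seen in the trace: split both
# versions on '.', '-' or ':' and compare component-wise, treating
# missing components as 0. Returns 0 (true) when $1 < $2.
lt_version() {
    local IFS=.-:
    local -a v1 v2
    read -ra v1 <<< "$1"
    read -ra v2 <<< "$2"
    local i max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    for (( i = 0; i < max; i++ )); do
        if (( ${v1[i]:-0} > ${v2[i]:-0} )); then
            return 1
        elif (( ${v1[i]:-0} < ${v2[i]:-0} )); then
            return 0
        fi
    done
    return 1  # versions are equal, so not strictly less-than
}

lt_version 1.15 2 && echo "1.15 < 2"   # the lcov check in the log
```

This is why the trace shows `ver1_l=2` and `ver2_l=1`: `1.15` splits into two components while `2` has one, and the first differing component (`1 < 2`) decides the result, enabling the `--rc lcov_branch_coverage=1` options for lcov >= some threshold.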
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:41.247 08:49:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:41.247 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:41.248 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:05:41.248 08:49:57 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:05:41.248 08:49:57 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@136 -- # e810=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:47.823 
08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:47.823 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:47.823 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:47.823 Found net devices under 0000:86:00.0: cvl_0_0 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:47.823 Found net devices under 0000:86:00.1: cvl_0_1 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:47.823 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:05:47.824 08:50:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:05:47.824 
08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr 
add 10.0.0.1/24 dev cvl_0_0 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:05:47.824 10.0.0.1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:05:47.824 08:50:02 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:05:47.824 08:50:02 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:05:47.824 10.0.0.2 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp 
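The `val_to_ip` calls in the trace turn the integer pool value (`ip_pool=0x0a000001`, i.e. 167772161) into dotted-quad addresses with `printf '%u.%u.%u.%u\n'`; each initiator/target pair consumes two consecutive values (10.0.0.1 and 10.0.0.2 here). A self-contained sketch of that conversion, assuming the same shift-and-mask unpacking the helper implies:

```shell
# Re-creation of setup.sh's val_to_ip: unpack a 32-bit integer into
# IPv4 dotted-quad form, most significant octet first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1 (initiator, cvl_0_0)
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2 (target, cvl_0_1)
```

Keeping the pool as an integer lets `setup_interfaces` hand out pairs with plain arithmetic (`ips=("$ip" $((++ip)))`) and bounds-check the pool with `(_dev + no) * 2 <= 255` before formatting.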
== tcp ]] 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:47.824 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:47.825 08:50:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:05:47.825 08:50:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:05:47.825 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:47.825 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.415 ms 00:05:47.825 00:05:47.825 --- 10.0.0.1 ping statistics --- 00:05:47.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:47.825 rtt min/avg/max/mdev = 0.415/0.415/0.415/0.000 ms 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
-- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:47.825 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:47.825 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:05:47.825 00:05:47.825 --- 10.0.0.2 ping statistics --- 00:05:47.825 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:47.825 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:47.825 08:50:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 
00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:05:47.825 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:05:47.826 08:50:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:47.826 08:50:03 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=2174443 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 2174443 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2174443 ']' 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:47.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:47.826 08:50:03 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:47.826 [2024-11-20 08:50:03.308485] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:05:47.826 [2024-11-20 08:50:03.308535] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:47.826 [2024-11-20 08:50:03.389625] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:47.826 [2024-11-20 08:50:03.431138] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:47.826 [2024-11-20 08:50:03.431172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:47.826 [2024-11-20 08:50:03.431180] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:47.826 [2024-11-20 08:50:03.431186] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:47.826 [2024-11-20 08:50:03.431191] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:47.826 [2024-11-20 08:50:03.432384] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:47.826 [2024-11-20 08:50:03.432385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.394 [2024-11-20 08:50:04.186257] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:48.394 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 [2024-11-20 08:50:04.206465] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 NULL1 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 Delay0 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2174691 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:05:48.395 08:50:04 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:48.395 [2024-11-20 08:50:04.318229] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:05:50.294 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:05:50.294 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:50.294 08:50:06 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 
00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.551 Write completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 [2024-11-20 08:50:06.437781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f890400d020 is same with the state(6) to be set 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 Read completed with error (sct=0, sc=8) 00:05:50.551 starting I/O failed: -6 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 starting I/O failed: -6 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 starting I/O failed: -6 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 starting I/O failed: -6 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 Read 
completed with error (sct=0, sc=8) 00:05:50.552 Read completed with error (sct=0, sc=8) 00:05:50.552 Write completed with error (sct=0, sc=8) 00:05:50.552 starting I/O failed: -6 
[00:05:50.552: "Read/Write completed with error (sct=0, sc=8)" and "starting I/O failed: -6" repeated for the remaining queued I/O; duplicate lines omitted] 
00:05:50.552 [2024-11-20 08:50:06.438208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30780 is same with the state(6) to be set 
00:05:50.552 [2024-11-20 08:50:06.438567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f890400d350 is same with the state(6) to be set 
00:05:51.485 [2024-11-20 08:50:07.412785] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d319a0 is same with the state(6) to be set 
[00:05:51.485: further "Read/Write completed with error (sct=0, sc=8)" duplicate lines omitted] 
00:05:51.485 [2024-11-20 08:50:07.436090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30960 is same with the state(6) to be set 
00:05:51.485 [2024-11-20 08:50:07.436217] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d30b40 is same with the state(6) to be set 
00:05:51.485 [2024-11-20 08:50:07.441270] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f8904000c40 is same with the state(6) to be set 
00:05:51.485 [2024-11-20 08:50:07.441847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f890400d680 is same with the state(6) to be set 
00:05:51.485 Initializing NVMe Controllers 00:05:51.485 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:51.485 Controller IO queue size 128, less than required. 00:05:51.485 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:51.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:51.485 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:51.485 Initialization complete. Launching workers. 
00:05:51.485 ======================================================== 00:05:51.485 Latency(us) 00:05:51.485 Device Information : IOPS MiB/s Average min max 00:05:51.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 171.86 0.08 888180.98 406.70 1010017.45 00:05:51.485 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 155.97 0.08 929445.32 796.63 1044451.26 00:05:51.485 ======================================================== 00:05:51.485 Total : 327.83 0.16 907812.80 406.70 1044451.26 00:05:51.485 00:05:51.485 [2024-11-20 08:50:07.442413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d319a0 (9): Bad file descriptor 00:05:51.485 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:05:51.485 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:51.485 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:05:51.485 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2174691 00:05:51.485 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2174691 00:05:52.050 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2174691) - No such process 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2174691 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:05:52.050 08:50:07 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2174691 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2174691 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:52.050 
08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.050 [2024-11-20 08:50:07.970531] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2175368 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:52.050 08:50:07 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:52.050 [2024-11-20 08:50:08.059762] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:05:52.614 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:52.615 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:52.615 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:53.178 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:53.179 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:53.179 08:50:08 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:53.744 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:53.744 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:53.745 08:50:09 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.002 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.002 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:54.002 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:54.566 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:54.566 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:54.566 08:50:10 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.131 08:50:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.131 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:55.131 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:05:55.391 Initializing NVMe Controllers 00:05:55.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:05:55.391 Controller IO queue size 128, less than required. 00:05:55.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:05:55.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:05:55.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:05:55.391 Initialization complete. Launching workers. 00:05:55.391 ======================================================== 00:05:55.391 Latency(us) 00:05:55.391 Device Information : IOPS MiB/s Average min max 00:05:55.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1001976.94 1000163.15 1005532.64 00:05:55.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1003443.36 1000117.07 1009444.69 00:05:55.391 ======================================================== 00:05:55.391 Total : 256.00 0.12 1002710.15 1000117.07 1009444.69 00:05:55.391 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2175368 00:05:55.649 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2175368) - No such process 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 2175368 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:05:55.649 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:05:55.649 rmmod nvme_tcp 00:05:55.649 rmmod nvme_fabrics 00:05:55.649 rmmod nvme_keyring 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 2174443 ']' 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 2174443 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2174443 ']' 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2174443 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:05:55.650 08:50:11 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2174443 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2174443' 00:05:55.650 killing process with pid 2174443 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2174443 00:05:55.650 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2174443 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:55.909 08:50:11 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:57.815 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:05:57.815 08:50:13 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:05:58.074 00:05:58.074 real 0m16.980s 00:05:58.074 user 0m30.481s 00:05:58.074 sys 0m5.773s 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:05:58.074 ************************************ 00:05:58.074 END TEST nvmf_delete_subsystem 00:05:58.074 ************************************ 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:58.074 ************************************ 00:05:58.074 START TEST nvmf_host_management 00:05:58.074 
************************************ 00:05:58.074 08:50:13 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:05:58.074 * Looking for test storage... 00:05:58.074 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # 
ver2_l=1 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:58.074 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:58.340 08:50:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:58.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.340 --rc genhtml_branch_coverage=1 00:05:58.340 --rc genhtml_function_coverage=1 00:05:58.340 --rc genhtml_legend=1 00:05:58.340 --rc geninfo_all_blocks=1 00:05:58.340 --rc geninfo_unexecuted_blocks=1 00:05:58.340 00:05:58.340 ' 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:58.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.340 --rc genhtml_branch_coverage=1 00:05:58.340 --rc genhtml_function_coverage=1 00:05:58.340 --rc genhtml_legend=1 00:05:58.340 --rc geninfo_all_blocks=1 00:05:58.340 --rc geninfo_unexecuted_blocks=1 00:05:58.340 00:05:58.340 ' 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:58.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.340 --rc genhtml_branch_coverage=1 00:05:58.340 --rc genhtml_function_coverage=1 00:05:58.340 --rc genhtml_legend=1 00:05:58.340 --rc geninfo_all_blocks=1 00:05:58.340 --rc geninfo_unexecuted_blocks=1 00:05:58.340 00:05:58.340 ' 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:58.340 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:58.340 --rc genhtml_branch_coverage=1 00:05:58.340 --rc genhtml_function_coverage=1 00:05:58.340 --rc genhtml_legend=1 00:05:58.340 --rc geninfo_all_blocks=1 00:05:58.340 --rc geninfo_unexecuted_blocks=1 00:05:58.340 00:05:58.340 ' 
00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:58.340 08:50:14 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.340 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:58.341 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
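The `[: : integer expression expected` message above is the shell rejecting the traced test `'[' '' -eq 1 ']'`: the variable being examined expanded to an empty string, which is not a valid operand for the numeric `-eq` operator, so `[` reports an error (the script continues because the test simply evaluates false-ish with a nonzero status). A defensive pattern is to default the value before the arithmetic test; `MAYBE_FLAG` below is a hypothetical stand-in, not a variable name from nvmf/common.sh.

```shell
# Reproducing and guarding the "integer expression expected" error
# traced above. MAYBE_FLAG is a hypothetical stand-in for whatever
# variable expanded to '' in the log.
MAYBE_FLAG=""

# '[' "" -eq 1 ']' is an operand error: test exits with status > 1.
[ "$MAYBE_FLAG" -eq 1 ] 2>/dev/null
echo "unguarded test exit status: $?"

# Defaulting with ${var:-0} keeps the comparison a valid integer test.
if [ "${MAYBE_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset or zero"
fi
```

An alternative guard with the same effect is `[ -n "$MAYBE_FLAG" ] && [ "$MAYBE_FLAG" -eq 1 ]`, which skips the numeric test entirely when the variable is empty.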
nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:05:58.341 08:50:14 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:06:05.081 08:50:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 
]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:05.081 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:05.081 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:05.082 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:05.082 08:50:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:05.082 Found net devices under 0000:86:00.0: cvl_0_0 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:05.082 Found net devices under 0000:86:00.1: cvl_0_1 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@142 -- # local 
ns=nvmf_ns_spdk 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:05.082 08:50:19 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:05.082 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
00:06:05.083 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:06:05.083 08:50:19 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:05.083 10.0.0.1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:05.083 10.0.0.2 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' 
ip link set cvl_0_0 up' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:05.083 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:05.083 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:05.083 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:05.084 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
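The addresses being pinged here were derived by the `val_to_ip` step traced earlier: the interface pair setup walks an integer pool starting at `0x0a000001` (167772161) and prints each value as a dotted quad, yielding 10.0.0.1 for the initiator and 10.0.0.2 for the target namespace. The conversion can be sketched as below; the helper name `int_to_ip` is hypothetical, not taken from nvmf/setup.sh.

```shell
# Sketch of the val_to_ip idea: turn a 32-bit integer from the ip_pool
# (e.g. 167772161 == 0x0A000001) into dotted-quad form by extracting
# each octet with shifts and masks.
int_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >>  8) & 255 )) \
        $((  val        & 255 ))
}

int_to_ip 167772161   # 10.0.0.1
int_to_ip 167772162   # 10.0.0.2
```

Allocating addresses from an integer pool like this keeps each initiator/target pair's IPs adjacent (`$ip` and `$((++ip))` in the trace) and makes it trivial to bound the pool, as the `(_dev + no) * 2 <= 255` check does.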
00:06:05.084 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.498 ms 00:06:05.084 00:06:05.084 --- 10.0.0.1 ping statistics --- 00:06:05.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.084 rtt min/avg/max/mdev = 0.498/0.498/0.498/0.000 ms 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:05.084 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:05.084 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:06:05.084 00:06:05.084 --- 10.0.0.2 ping statistics --- 00:06:05.084 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:05.084 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:05.084 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:05.084 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:06:05.085 
08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.085 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=2179468 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 2179468 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2179468 ']' 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:05.085 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 [2024-11-20 08:50:20.364212] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:06:05.085 [2024-11-20 08:50:20.364263] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:05.085 [2024-11-20 08:50:20.444365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:05.085 [2024-11-20 08:50:20.487972] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:05.085 [2024-11-20 08:50:20.488010] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:05.085 [2024-11-20 08:50:20.488017] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:05.085 [2024-11-20 08:50:20.488023] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:05.085 [2024-11-20 08:50:20.488028] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:05.085 [2024-11-20 08:50:20.489568] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:05.085 [2024-11-20 08:50:20.489675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:06:05.085 [2024-11-20 08:50:20.489743] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:06:05.085 [2024-11-20 08:50:20.489744] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 [2024-11-20 08:50:20.627013] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:05.085 08:50:20 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.085 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.085 Malloc0 00:06:05.086 [2024-11-20 08:50:20.711881] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2179690 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2179690 /var/tmp/bdevperf.sock 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2179690 ']' 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:06:05.086 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:06:05.086 { 00:06:05.086 "params": { 00:06:05.086 "name": "Nvme$subsystem", 00:06:05.086 "trtype": "$TEST_TRANSPORT", 00:06:05.086 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:05.086 "adrfam": "ipv4", 00:06:05.086 "trsvcid": "$NVMF_PORT", 00:06:05.086 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:05.086 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:05.086 "hdgst": ${hdgst:-false}, 
00:06:05.086 "ddgst": ${ddgst:-false} 00:06:05.086 }, 00:06:05.086 "method": "bdev_nvme_attach_controller" 00:06:05.086 } 00:06:05.086 EOF 00:06:05.086 )") 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:06:05.086 08:50:20 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:06:05.086 "params": { 00:06:05.086 "name": "Nvme0", 00:06:05.086 "trtype": "tcp", 00:06:05.086 "traddr": "10.0.0.2", 00:06:05.086 "adrfam": "ipv4", 00:06:05.086 "trsvcid": "4420", 00:06:05.086 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:05.086 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:05.086 "hdgst": false, 00:06:05.086 "ddgst": false 00:06:05.086 }, 00:06:05.086 "method": "bdev_nvme_attach_controller" 00:06:05.086 }' 00:06:05.086 [2024-11-20 08:50:20.808222] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:06:05.086 [2024-11-20 08:50:20.808269] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179690 ] 00:06:05.086 [2024-11-20 08:50:20.883636] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.086 [2024-11-20 08:50:20.924930] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.347 Running I/O for 10 seconds... 
00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=105 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 105 -ge 100 ']' 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.347 [2024-11-20 08:50:21.234913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.347 [2024-11-20 08:50:21.234956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.347 [2024-11-20 08:50:21.234967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.347 [2024-11-20 08:50:21.234975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.347 [2024-11-20 08:50:21.234983] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.347 [2024-11-20 08:50:21.234989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.347 [2024-11-20 08:50:21.234997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:05.347 [2024-11-20 08:50:21.235003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:05.347 [2024-11-20 08:50:21.235010] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x565500 is same with the state(6) to be set 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:05.347 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:05.347 [2024-11-20 08:50:21.242116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:05.347 [2024-11-20 08:50:21.242140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:05.347 [2024-11-20 08:50:21.242154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.347 [2024-11-20 08:50:21.242162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE / ABORTED - SQ DELETION notice pair repeats for cid:2 through cid:62, lba 24832 through 32512 in steps of 128 ...]
00:06:05.349 [2024-11-20 08:50:21.243119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:06:05.349 [2024-11-20 08:50:21.243126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:06:05.349 [2024-11-20 08:50:21.244078] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller
00:06:05.349 task offset: 24576 on job bdev=Nvme0n1 fails
00:06:05.349
00:06:05.349 Latency(us)
[2024-11-20T07:50:21.390Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:05.349 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:05.349 Job: Nvme0n1 ended in about 0.11 seconds with error
00:06:05.349 Verification LBA range: start 0x0 length 0x400
00:06:05.349 Nvme0n1 : 0.11 1704.14 106.51 568.05 0.00 25988.87 1346.34 27810.06
00:06:05.349 [2024-11-20T07:50:21.390Z] ===================================================================================================================
00:06:05.349 [2024-11-20T07:50:21.391Z] Total : 1704.14 106.51 568.05 0.00 25988.87 1346.34 27810.06
00:06:05.350 [2024-11-20 08:50:21.246486] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:06:05.350 [2024-11-20 08:50:21.246506] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x565500 (9): Bad file descriptor
00:06:05.350 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:06:05.350 08:50:21 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1
00:06:05.350 [2024-11-20 08:50:21.296068] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful.
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2179690
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2179690) - No such process
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=()
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:06:06.284 {
00:06:06.284 "params": {
00:06:06.284 "name": "Nvme$subsystem",
00:06:06.284 "trtype": "$TEST_TRANSPORT",
00:06:06.284 "traddr": "$NVMF_FIRST_TARGET_IP",
00:06:06.284 "adrfam": "ipv4",
00:06:06.284 "trsvcid": "$NVMF_PORT",
00:06:06.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:06:06.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:06:06.284 "hdgst": ${hdgst:-false},
00:06:06.284 "ddgst": ${ddgst:-false}
00:06:06.284 },
00:06:06.284 "method": "bdev_nvme_attach_controller"
00:06:06.284 }
00:06:06.284 EOF
00:06:06.284 )")
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq .
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=,
00:06:06.284 08:50:22 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:06:06.284 "params": {
00:06:06.284 "name": "Nvme0",
00:06:06.284 "trtype": "tcp",
00:06:06.284 "traddr": "10.0.0.2",
00:06:06.284 "adrfam": "ipv4",
00:06:06.284 "trsvcid": "4420",
00:06:06.284 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:06:06.284 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:06:06.284 "hdgst": false,
00:06:06.284 "ddgst": false
00:06:06.284 },
00:06:06.284 "method": "bdev_nvme_attach_controller"
00:06:06.284 }'
00:06:06.284 [2024-11-20 08:50:22.307770] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:06:06.284 [2024-11-20 08:50:22.307818] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2179938 ]
00:06:06.542 [2024-11-20 08:50:22.384579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:06:06.542 [2024-11-20 08:50:22.424133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:06:06.801 Running I/O for 1 seconds...
00:06:07.736 1945.00 IOPS, 121.56 MiB/s
00:06:07.736 Latency(us)
[2024-11-20T07:50:23.777Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:07.736 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:06:07.736 Verification LBA range: start 0x0 length 0x400
00:06:07.736 Nvme0n1 : 1.01 1991.35 124.46 0.00 0.00 31517.96 1966.08 27810.06
[2024-11-20T07:50:23.777Z] ===================================================================================================================
[2024-11-20T07:50:23.777Z] Total : 1991.35 124.46 0.00 0.00 31517.96 1966.08 27810.06
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20}
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:06:07.995 rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 2179468 ']'
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 2179468
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2179468 ']'
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2179468
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:06:07.995 08:50:23 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2179468
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2179468'
killing process with pid 2179468
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2179468
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2179468
00:06:08.254 [2024-11-20 08:50:24.196520] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:06:08.254 08:50:24 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=()
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT
00:06:10.792
00:06:10.792 real 0m12.358s
00:06:10.792 user 0m18.723s
00:06:10.792 sys 0m5.603s
00:06:10.792 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x
00:06:10.793 ************************************
00:06:10.793 END TEST nvmf_host_management
00:06:10.793 ************************************
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:06:10.793 ************************************
00:06:10.793 START TEST nvmf_lvol
00:06:10.793 ************************************
00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp
00:06:10.793 * Looking for test storage...
00:06:10.793 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:10.793 08:50:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.793 --rc genhtml_branch_coverage=1 00:06:10.793 --rc genhtml_function_coverage=1 00:06:10.793 --rc genhtml_legend=1 00:06:10.793 --rc geninfo_all_blocks=1 00:06:10.793 --rc geninfo_unexecuted_blocks=1 
00:06:10.793 00:06:10.793 ' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.793 --rc genhtml_branch_coverage=1 00:06:10.793 --rc genhtml_function_coverage=1 00:06:10.793 --rc genhtml_legend=1 00:06:10.793 --rc geninfo_all_blocks=1 00:06:10.793 --rc geninfo_unexecuted_blocks=1 00:06:10.793 00:06:10.793 ' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.793 --rc genhtml_branch_coverage=1 00:06:10.793 --rc genhtml_function_coverage=1 00:06:10.793 --rc genhtml_legend=1 00:06:10.793 --rc geninfo_all_blocks=1 00:06:10.793 --rc geninfo_unexecuted_blocks=1 00:06:10.793 00:06:10.793 ' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:10.793 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:10.793 --rc genhtml_branch_coverage=1 00:06:10.793 --rc genhtml_function_coverage=1 00:06:10.793 --rc genhtml_legend=1 00:06:10.793 --rc geninfo_all_blocks=1 00:06:10.793 --rc geninfo_unexecuted_blocks=1 00:06:10.793 00:06:10.793 ' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:10.793 08:50:26 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.793 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:10.794 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
-- # _remove_target_ns 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:06:10.794 08:50:26 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:06:17.361 08:50:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:17.361 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:17.361 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:17.361 Found net devices under 0000:86:00.0: cvl_0_0 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:17.361 Found net devices under 0000:86:00.1: cvl_0_1 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:17.361 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:17.362 10.0.0.1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 
10.0.0.2 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:17.362 10.0.0.2 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:17.362 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:17.363 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:17.363 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.491 ms 00:06:17.363 00:06:17.363 --- 10.0.0.1 ping statistics --- 00:06:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.363 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:17.363 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:17.363 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.059 ms 00:06:17.363 00:06:17.363 --- 10.0.0.2 ping statistics --- 00:06:17.363 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:17.363 rtt min/avg/max/mdev = 0.059/0.059/0.059/0.000 ms 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:06:17.363 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- 
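The trace above repeatedly resolves a logical device name (initiator0, target0) to a kernel interface and then reads its test IP back from `/sys/class/net/<dev>/ifalias`, which the setup stored earlier with `tee`. A simplified, runnable sketch of that lookup, using a temporary directory in place of sysfs (the helper loosely mirrors setup.sh's `get_ip_address`, without the netns handling):

```shell
# Fake sysfs tree so the sketch runs without the cvl_0_* NICs present.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"    # what "tee .../ifalias" stored above

# Simplified stand-in for setup.sh's get_ip_address(): read the alias back,
# echoing it only when non-empty (the [[ -n $ip ]] check in the trace).
get_ip_address() {
    local dev=$1 ip
    ip=$(cat "$sysfs/$dev/ifalias")
    [ -n "$ip" ] && echo "$ip"
}

get_ip_address cvl_0_0    # prints 10.0.0.1
```

Storing the address in ifalias lets any later shell in the pipeline recover it from the interface itself, without passing environment variables around.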
# get_tcp_initiator_ip_address 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:06:17.364 08:50:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 
-- # local dev=target0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:17.364 08:50:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:17.364 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=2183741 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 2183741 00:06:17.365 08:50:32 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2183741 ']' 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:17.365 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:17.365 08:50:32 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:17.365 [2024-11-20 08:50:32.810689] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:06:17.365 [2024-11-20 08:50:32.810740] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:17.365 [2024-11-20 08:50:32.894558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:17.365 [2024-11-20 08:50:32.935777] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:17.365 [2024-11-20 08:50:32.935814] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:06:17.365 [2024-11-20 08:50:32.935821] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:17.365 [2024-11-20 08:50:32.935827] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:17.365 [2024-11-20 08:50:32.935832] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:17.365 [2024-11-20 08:50:32.937243] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:17.365 [2024-11-20 08:50:32.937350] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.365 [2024-11-20 08:50:32.937352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:17.365 [2024-11-20 08:50:33.250742] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:17.365 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:17.622 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # 
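nvmf_tgt is launched with `-m 0x7` and the log duly reports three reactors on cores 0, 1 and 2; the later perf run uses `-c 0x18` (cores 3 and 4). The mask-to-core mapping is plain bit arithmetic; a small sketch (the helper name is ours, not an SPDK interface):

```shell
# List the CPU core indices whose bit is set in a hex core mask.
cores_from_mask() {
    local mask=$(( $1 )) bit=0 cores=""
    while [ $(( mask >> bit )) -ne 0 ]; do
        [ $(( (mask >> bit) & 1 )) -eq 1 ] && cores="$cores $bit"
        bit=$(( bit + 1 ))
    done
    echo "${cores# }"
}

cores_from_mask 0x7     # 0 1 2  -> the three reactor cores above
cores_from_mask 0x18    # 3 4    -> the perf run's cores
```

Keeping the target's mask (0x7) disjoint from the perf tool's mask (0x18) means the two processes never contend for the same cores during the measurement.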
base_bdevs='Malloc0 ' 00:06:17.622 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:17.879 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:17.879 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:17.879 08:50:33 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:18.137 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=3ec66129-f4b4-4ac3-b481-aa3a51070d86 00:06:18.137 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3ec66129-f4b4-4ac3-b481-aa3a51070d86 lvol 20 00:06:18.394 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cdfbd95e-c694-4db5-a4e2-62fbd02544d1 00:06:18.394 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:18.649 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cdfbd95e-c694-4db5-a4e2-62fbd02544d1 00:06:18.906 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:18.906 [2024-11-20 08:50:34.922267] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:06:18.906 08:50:34 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:19.163 08:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2184228 00:06:19.163 08:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:19.163 08:50:35 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:20.532 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cdfbd95e-c694-4db5-a4e2-62fbd02544d1 MY_SNAPSHOT 00:06:20.532 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=a7e90187-d1b4-4a8d-87c1-d7ad11f028d6 00:06:20.532 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cdfbd95e-c694-4db5-a4e2-62fbd02544d1 30 00:06:20.788 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone a7e90187-d1b4-4a8d-87c1-d7ad11f028d6 MY_CLONE 00:06:21.045 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=75c4c6b2-5db0-4c5a-a0f2-7b46ba93d1db 00:06:21.045 08:50:36 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 75c4c6b2-5db0-4c5a-a0f2-7b46ba93d1db 00:06:21.609 08:50:37 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2184228 00:06:29.718 Initializing NVMe Controllers 00:06:29.718 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:29.718 Controller IO queue size 128, less than required. 00:06:29.718 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:29.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:06:29.718 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:06:29.718 Initialization complete. Launching workers. 00:06:29.719 ======================================================== 00:06:29.719 Latency(us) 00:06:29.719 Device Information : IOPS MiB/s Average min max 00:06:29.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12072.90 47.16 10608.68 2066.79 65001.15 00:06:29.719 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12168.10 47.53 10522.88 1227.30 72391.72 00:06:29.719 ======================================================== 00:06:29.719 Total : 24241.00 94.69 10565.61 1227.30 72391.72 00:06:29.719 00:06:29.719 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:29.719 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cdfbd95e-c694-4db5-a4e2-62fbd02544d1 00:06:29.976 08:50:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3ec66129-f4b4-4ac3-b481-aa3a51070d86 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
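The perf summary above can be sanity-checked: the per-core IOPS sum to the Total row, MiB/s is IOPS times the 4096-byte I/O size, and the total average latency is the IOPS-weighted mean of the per-core averages. A quick awk check of the printed figures:

```shell
awk 'BEGIN {
    iops3 = 12072.90; iops4 = 12168.10    # per-core IOPS from the table
    avg3  = 10608.68; avg4  = 10522.88    # per-core average latency (us)
    total = iops3 + iops4
    mibs  = total * 4096 / (1024 * 1024)  # 4 KiB I/Os -> MiB/s
    wavg  = (iops3 * avg3 + iops4 * avg4) / total
    printf "%.2f %.2f %.2f\n", total, mibs, wavg   # 24241.00 94.69 10565.61
}'
```

All three values match the Total row, which is a useful quick check that the table was not truncated or mis-merged when the log lines were interleaved.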
target/nvmf_lvol.sh@64 -- # nvmftestfini 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:30.234 rmmod nvme_tcp 00:06:30.234 rmmod nvme_fabrics 00:06:30.234 rmmod nvme_keyring 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 2183741 ']' 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 2183741 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2183741 ']' 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2183741 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2183741 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2183741' 00:06:30.234 killing process with pid 2183741 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2183741 00:06:30.234 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2183741 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:30.493 08:50:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:06:33.023 08:50:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:06:33.023 00:06:33.023 real 
0m22.136s 00:06:33.023 user 1m3.172s 00:06:33.023 sys 0m7.845s 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:33.023 ************************************ 00:06:33.023 END TEST nvmf_lvol 00:06:33.023 ************************************ 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:33.023 08:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:33.024 ************************************ 00:06:33.024 START TEST nvmf_lvs_grow 00:06:33.024 ************************************ 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:06:33.024 * Looking for test storage... 
00:06:33.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.024 --rc genhtml_branch_coverage=1 00:06:33.024 --rc 
genhtml_function_coverage=1 00:06:33.024 --rc genhtml_legend=1 00:06:33.024 --rc geninfo_all_blocks=1 00:06:33.024 --rc geninfo_unexecuted_blocks=1 00:06:33.024 00:06:33.024 ' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.024 --rc genhtml_branch_coverage=1 00:06:33.024 --rc genhtml_function_coverage=1 00:06:33.024 --rc genhtml_legend=1 00:06:33.024 --rc geninfo_all_blocks=1 00:06:33.024 --rc geninfo_unexecuted_blocks=1 00:06:33.024 00:06:33.024 ' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.024 --rc genhtml_branch_coverage=1 00:06:33.024 --rc genhtml_function_coverage=1 00:06:33.024 --rc genhtml_legend=1 00:06:33.024 --rc geninfo_all_blocks=1 00:06:33.024 --rc geninfo_unexecuted_blocks=1 00:06:33.024 00:06:33.024 ' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:33.024 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:33.024 --rc genhtml_branch_coverage=1 00:06:33.024 --rc genhtml_function_coverage=1 00:06:33.024 --rc genhtml_legend=1 00:06:33.024 --rc geninfo_all_blocks=1 00:06:33.024 --rc geninfo_unexecuted_blocks=1 00:06:33.024 00:06:33.024 ' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:33.024 08:50:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.024 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:33.025 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:33.025 08:50:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:06:33.025 08:50:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:06:39.598 
08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:06:39.598 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:39.599 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:39.599 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:39.599 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:39.599 Found net devices under 0000:86:00.0: cvl_0_0 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.599 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:39.599 Found net devices under 0000:86:00.1: cvl_0_1 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:06:39.599 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:39.599 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:39.599 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 
00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:39.600 10.0.0.1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.600 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:39.600 10.0.0.2 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:39.600 
08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:39.600 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:39.600 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:39.600 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.497 ms 00:06:39.600 00:06:39.600 --- 10.0.0.1 ping statistics --- 00:06:39.600 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.600 rtt min/avg/max/mdev = 0.497/0.497/0.497/0.000 ms 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.600 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:39.601 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:39.601 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:39.601 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:06:39.601 00:06:39.601 --- 10.0.0.2 ping statistics --- 00:06:39.601 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:39.601 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:39.601 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:39.601 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:06:39.601 08:50:54 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:39.601 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=2189634 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 2189634 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2189634 ']' 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:39.602 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:39.602 08:50:54 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.602 [2024-11-20 08:50:54.987921] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:06:39.602 [2024-11-20 08:50:54.988001] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:39.602 [2024-11-20 08:50:55.068502] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:39.602 [2024-11-20 08:50:55.109824] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:39.602 [2024-11-20 08:50:55.109859] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:39.602 [2024-11-20 08:50:55.109867] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:39.602 [2024-11-20 08:50:55.109873] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:39.602 [2024-11-20 08:50:55.109878] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:39.602 [2024-11-20 08:50:55.110444] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:39.602 [2024-11-20 08:50:55.413999] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:39.602 ************************************ 00:06:39.602 START TEST lvs_grow_clean 00:06:39.602 ************************************ 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:39.602 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:39.861 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:39.861 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:39.861 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:39.861 08:50:55 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:39.861 08:50:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:40.119 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:40.119 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:40.120 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f lvol 150 00:06:40.379 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=45f4d132-0531-4acb-be70-77d634840d9d 00:06:40.379 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:40.379 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:40.379 [2024-11-20 08:50:56.416876] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:06:40.379 [2024-11-20 08:50:56.416927] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:40.638 true 00:06:40.638 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:40.638 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:40.638 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:40.638 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:40.896 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 45f4d132-0531-4acb-be70-77d634840d9d 00:06:41.155 08:50:56 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:41.155 [2024-11-20 08:50:57.155129] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.155 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2190134 00:06:41.413 08:50:57 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2190134 /var/tmp/bdevperf.sock 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2190134 ']' 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:41.413 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:41.413 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:41.413 [2024-11-20 08:50:57.375608] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:06:41.413 [2024-11-20 08:50:57.375652] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2190134 ] 00:06:41.413 [2024-11-20 08:50:57.449214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.671 [2024-11-20 08:50:57.490219] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.671 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.671 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:06:41.671 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:41.929 Nvme0n1 00:06:41.929 08:50:57 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:42.187 [ 00:06:42.187 { 00:06:42.187 "name": "Nvme0n1", 00:06:42.187 "aliases": [ 00:06:42.187 "45f4d132-0531-4acb-be70-77d634840d9d" 00:06:42.187 ], 00:06:42.187 "product_name": "NVMe disk", 00:06:42.187 "block_size": 4096, 00:06:42.187 "num_blocks": 38912, 00:06:42.187 "uuid": "45f4d132-0531-4acb-be70-77d634840d9d", 00:06:42.187 "numa_id": 1, 00:06:42.187 "assigned_rate_limits": { 00:06:42.187 "rw_ios_per_sec": 0, 00:06:42.187 "rw_mbytes_per_sec": 0, 00:06:42.187 "r_mbytes_per_sec": 0, 00:06:42.187 "w_mbytes_per_sec": 0 00:06:42.187 }, 00:06:42.187 "claimed": false, 00:06:42.187 "zoned": false, 00:06:42.187 "supported_io_types": { 00:06:42.187 "read": true, 
00:06:42.187 "write": true, 00:06:42.187 "unmap": true, 00:06:42.187 "flush": true, 00:06:42.187 "reset": true, 00:06:42.187 "nvme_admin": true, 00:06:42.187 "nvme_io": true, 00:06:42.187 "nvme_io_md": false, 00:06:42.187 "write_zeroes": true, 00:06:42.187 "zcopy": false, 00:06:42.187 "get_zone_info": false, 00:06:42.187 "zone_management": false, 00:06:42.187 "zone_append": false, 00:06:42.187 "compare": true, 00:06:42.187 "compare_and_write": true, 00:06:42.187 "abort": true, 00:06:42.187 "seek_hole": false, 00:06:42.188 "seek_data": false, 00:06:42.188 "copy": true, 00:06:42.188 "nvme_iov_md": false 00:06:42.188 }, 00:06:42.188 "memory_domains": [ 00:06:42.188 { 00:06:42.188 "dma_device_id": "system", 00:06:42.188 "dma_device_type": 1 00:06:42.188 } 00:06:42.188 ], 00:06:42.188 "driver_specific": { 00:06:42.188 "nvme": [ 00:06:42.188 { 00:06:42.188 "trid": { 00:06:42.188 "trtype": "TCP", 00:06:42.188 "adrfam": "IPv4", 00:06:42.188 "traddr": "10.0.0.2", 00:06:42.188 "trsvcid": "4420", 00:06:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:42.188 }, 00:06:42.188 "ctrlr_data": { 00:06:42.188 "cntlid": 1, 00:06:42.188 "vendor_id": "0x8086", 00:06:42.188 "model_number": "SPDK bdev Controller", 00:06:42.188 "serial_number": "SPDK0", 00:06:42.188 "firmware_revision": "25.01", 00:06:42.188 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:42.188 "oacs": { 00:06:42.188 "security": 0, 00:06:42.188 "format": 0, 00:06:42.188 "firmware": 0, 00:06:42.188 "ns_manage": 0 00:06:42.188 }, 00:06:42.188 "multi_ctrlr": true, 00:06:42.188 "ana_reporting": false 00:06:42.188 }, 00:06:42.188 "vs": { 00:06:42.188 "nvme_version": "1.3" 00:06:42.188 }, 00:06:42.188 "ns_data": { 00:06:42.188 "id": 1, 00:06:42.188 "can_share": true 00:06:42.188 } 00:06:42.188 } 00:06:42.188 ], 00:06:42.188 "mp_policy": "active_passive" 00:06:42.188 } 00:06:42.188 } 00:06:42.188 ] 00:06:42.188 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2190306 00:06:42.188 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:42.188 08:50:58 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:42.188 Running I/O for 10 seconds... 00:06:43.562 Latency(us) 00:06:43.562 [2024-11-20T07:50:59.603Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:43.562 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:43.562 Nvme0n1 : 1.00 22685.00 88.61 0.00 0.00 0.00 0.00 0.00 00:06:43.562 [2024-11-20T07:50:59.603Z] =================================================================================================================== 00:06:43.562 [2024-11-20T07:50:59.603Z] Total : 22685.00 88.61 0.00 0.00 0.00 0.00 0.00 00:06:43.562 00:06:44.129 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:44.390 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:44.390 Nvme0n1 : 2.00 22780.50 88.99 0.00 0.00 0.00 0.00 0.00 00:06:44.390 [2024-11-20T07:51:00.431Z] =================================================================================================================== 00:06:44.390 [2024-11-20T07:51:00.431Z] Total : 22780.50 88.99 0.00 0.00 0.00 0.00 0.00 00:06:44.390 00:06:44.390 true 00:06:44.390 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:44.390 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:06:44.651 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:06:44.651 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:06:44.651 08:51:00 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2190306 00:06:45.220 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:45.220 Nvme0n1 : 3.00 22764.67 88.92 0.00 0.00 0.00 0.00 0.00 00:06:45.220 [2024-11-20T07:51:01.261Z] =================================================================================================================== 00:06:45.220 [2024-11-20T07:51:01.261Z] Total : 22764.67 88.92 0.00 0.00 0.00 0.00 0.00 00:06:45.220 00:06:46.154 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:46.154 Nvme0n1 : 4.00 22836.50 89.21 0.00 0.00 0.00 0.00 0.00 00:06:46.154 [2024-11-20T07:51:02.195Z] =================================================================================================================== 00:06:46.154 [2024-11-20T07:51:02.195Z] Total : 22836.50 89.21 0.00 0.00 0.00 0.00 0.00 00:06:46.154 00:06:47.530 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:47.530 Nvme0n1 : 5.00 22893.60 89.43 0.00 0.00 0.00 0.00 0.00 00:06:47.530 [2024-11-20T07:51:03.571Z] =================================================================================================================== 00:06:47.530 [2024-11-20T07:51:03.571Z] Total : 22893.60 89.43 0.00 0.00 0.00 0.00 0.00 00:06:47.530 00:06:48.464 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:48.464 Nvme0n1 : 6.00 22935.50 89.59 0.00 0.00 0.00 0.00 0.00 00:06:48.464 [2024-11-20T07:51:04.505Z] =================================================================================================================== 00:06:48.464 
[2024-11-20T07:51:04.505Z] Total : 22935.50 89.59 0.00 0.00 0.00 0.00 0.00 00:06:48.464 00:06:49.402 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:49.402 Nvme0n1 : 7.00 22958.14 89.68 0.00 0.00 0.00 0.00 0.00 00:06:49.402 [2024-11-20T07:51:05.443Z] =================================================================================================================== 00:06:49.402 [2024-11-20T07:51:05.443Z] Total : 22958.14 89.68 0.00 0.00 0.00 0.00 0.00 00:06:49.402 00:06:50.339 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:50.339 Nvme0n1 : 8.00 22985.88 89.79 0.00 0.00 0.00 0.00 0.00 00:06:50.339 [2024-11-20T07:51:06.380Z] =================================================================================================================== 00:06:50.339 [2024-11-20T07:51:06.380Z] Total : 22985.88 89.79 0.00 0.00 0.00 0.00 0.00 00:06:50.339 00:06:51.275 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:51.275 Nvme0n1 : 9.00 23014.22 89.90 0.00 0.00 0.00 0.00 0.00 00:06:51.275 [2024-11-20T07:51:07.316Z] =================================================================================================================== 00:06:51.275 [2024-11-20T07:51:07.316Z] Total : 23014.22 89.90 0.00 0.00 0.00 0.00 0.00 00:06:51.275 00:06:52.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:52.211 Nvme0n1 : 10.00 23022.00 89.93 0.00 0.00 0.00 0.00 0.00 00:06:52.211 [2024-11-20T07:51:08.252Z] =================================================================================================================== 00:06:52.211 [2024-11-20T07:51:08.252Z] Total : 23022.00 89.93 0.00 0.00 0.00 0.00 0.00 00:06:52.211 00:06:52.211 00:06:52.211 Latency(us) 00:06:52.211 [2024-11-20T07:51:08.252Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.211 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:06:52.211 Nvme0n1 : 10.00 23017.81 89.91 0.00 0.00 5557.49 3191.32 10086.85 00:06:52.211 [2024-11-20T07:51:08.252Z] =================================================================================================================== 00:06:52.211 [2024-11-20T07:51:08.252Z] Total : 23017.81 89.91 0.00 0.00 5557.49 3191.32 10086.85 00:06:52.211 { 00:06:52.211 "results": [ 00:06:52.211 { 00:06:52.211 "job": "Nvme0n1", 00:06:52.211 "core_mask": "0x2", 00:06:52.211 "workload": "randwrite", 00:06:52.211 "status": "finished", 00:06:52.211 "queue_depth": 128, 00:06:52.211 "io_size": 4096, 00:06:52.211 "runtime": 10.003125, 00:06:52.211 "iops": 23017.80693533271, 00:06:52.211 "mibps": 89.91330834114339, 00:06:52.212 "io_failed": 0, 00:06:52.212 "io_timeout": 0, 00:06:52.212 "avg_latency_us": 5557.491410644385, 00:06:52.212 "min_latency_us": 3191.318260869565, 00:06:52.212 "max_latency_us": 10086.845217391305 00:06:52.212 } 00:06:52.212 ], 00:06:52.212 "core_count": 1 00:06:52.212 } 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2190134 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2190134 ']' 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2190134 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:52.212 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2190134 00:06:52.471 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:52.471 08:51:08 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:52.471 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2190134' 00:06:52.471 killing process with pid 2190134 00:06:52.471 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2190134 00:06:52.471 Received shutdown signal, test time was about 10.000000 seconds 00:06:52.471 00:06:52.471 Latency(us) 00:06:52.471 [2024-11-20T07:51:08.512Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:52.471 [2024-11-20T07:51:08.512Z] =================================================================================================================== 00:06:52.471 [2024-11-20T07:51:08.512Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:06:52.471 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2190134 00:06:52.471 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.729 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:52.988 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:52.988 08:51:08 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:06:52.988 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:06:52.988 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:06:52.988 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:53.247 [2024-11-20 08:51:09.189620] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.247 
08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:53.247 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:53.506 request: 00:06:53.506 { 00:06:53.506 "uuid": "c5fea95e-ff70-4491-9cf7-7c56ba4c601f", 00:06:53.506 "method": "bdev_lvol_get_lvstores", 00:06:53.506 "req_id": 1 00:06:53.506 } 00:06:53.506 Got JSON-RPC error response 00:06:53.506 response: 00:06:53.506 { 00:06:53.506 "code": -19, 00:06:53.506 "message": "No such device" 00:06:53.506 } 00:06:53.506 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:06:53.506 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:53.506 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:53.506 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:53.506 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:53.765 aio_bdev 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev 45f4d132-0531-4acb-be70-77d634840d9d 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=45f4d132-0531-4acb-be70-77d634840d9d 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:06:53.765 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 45f4d132-0531-4acb-be70-77d634840d9d -t 2000 00:06:54.024 [ 00:06:54.024 { 00:06:54.024 "name": "45f4d132-0531-4acb-be70-77d634840d9d", 00:06:54.024 "aliases": [ 00:06:54.024 "lvs/lvol" 00:06:54.024 ], 00:06:54.024 "product_name": "Logical Volume", 00:06:54.024 "block_size": 4096, 00:06:54.024 "num_blocks": 38912, 00:06:54.024 "uuid": "45f4d132-0531-4acb-be70-77d634840d9d", 00:06:54.024 "assigned_rate_limits": { 00:06:54.024 "rw_ios_per_sec": 0, 00:06:54.024 "rw_mbytes_per_sec": 0, 00:06:54.024 "r_mbytes_per_sec": 0, 00:06:54.024 "w_mbytes_per_sec": 0 00:06:54.024 }, 00:06:54.024 "claimed": false, 00:06:54.024 "zoned": false, 00:06:54.024 "supported_io_types": { 00:06:54.024 "read": true, 00:06:54.024 "write": true, 00:06:54.024 "unmap": true, 00:06:54.024 "flush": false, 00:06:54.024 "reset": true, 00:06:54.024 
"nvme_admin": false, 00:06:54.024 "nvme_io": false, 00:06:54.024 "nvme_io_md": false, 00:06:54.024 "write_zeroes": true, 00:06:54.024 "zcopy": false, 00:06:54.024 "get_zone_info": false, 00:06:54.024 "zone_management": false, 00:06:54.024 "zone_append": false, 00:06:54.024 "compare": false, 00:06:54.024 "compare_and_write": false, 00:06:54.024 "abort": false, 00:06:54.024 "seek_hole": true, 00:06:54.024 "seek_data": true, 00:06:54.024 "copy": false, 00:06:54.024 "nvme_iov_md": false 00:06:54.024 }, 00:06:54.024 "driver_specific": { 00:06:54.024 "lvol": { 00:06:54.024 "lvol_store_uuid": "c5fea95e-ff70-4491-9cf7-7c56ba4c601f", 00:06:54.024 "base_bdev": "aio_bdev", 00:06:54.024 "thin_provision": false, 00:06:54.024 "num_allocated_clusters": 38, 00:06:54.024 "snapshot": false, 00:06:54.024 "clone": false, 00:06:54.024 "esnap_clone": false 00:06:54.024 } 00:06:54.024 } 00:06:54.024 } 00:06:54.024 ] 00:06:54.024 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:06:54.024 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:54.024 08:51:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:06:54.283 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:06:54.283 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:54.283 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:06:54.541 08:51:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:06:54.541 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 45f4d132-0531-4acb-be70-77d634840d9d 00:06:54.541 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u c5fea95e-ff70-4491-9cf7-7c56ba4c601f 00:06:54.800 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:06:55.059 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.059 00:06:55.059 real 0m15.524s 00:06:55.059 user 0m15.171s 00:06:55.059 sys 0m1.412s 00:06:55.059 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:55.059 08:51:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 ************************************ 00:06:55.059 END TEST lvs_grow_clean 00:06:55.059 ************************************ 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:06:55.059 ************************************ 
00:06:55.059 START TEST lvs_grow_dirty 00:06:55.059 ************************************ 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.059 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:06:55.344 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:06:55.344 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:06:55.664 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e812937b-bf37-4a6f-be97-b63e8c6d22fb lvol 150 00:06:55.923 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:06:55.923 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:06:55.923 08:51:11 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:06:56.181 [2024-11-20 08:51:12.031936] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:06:56.181 [2024-11-20 08:51:12.031988] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:06:56.181 true 00:06:56.181 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:06:56.181 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:06:56.439 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:06:56.439 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:56.439 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:06:56.698 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:56.957 [2024-11-20 08:51:12.794212] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:56.957 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:57.215 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2193269 00:06:57.215 08:51:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:57.215 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:06:57.216 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2193269 /var/tmp/bdevperf.sock 00:06:57.216 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2193269 ']' 00:06:57.216 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:57.216 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:57.216 08:51:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:57.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:57.216 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:57.216 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:06:57.216 [2024-11-20 08:51:13.042824] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:06:57.216 [2024-11-20 08:51:13.042872] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2193269 ] 00:06:57.216 [2024-11-20 08:51:13.117347] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:57.216 [2024-11-20 08:51:13.159920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:57.216 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:57.216 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:06:57.216 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:06:57.782 Nvme0n1 00:06:57.782 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:06:57.782 [ 00:06:57.782 { 00:06:57.782 "name": "Nvme0n1", 00:06:57.782 "aliases": [ 00:06:57.782 "286e6ba7-2844-4238-8aa4-c16aef6ce0f3" 00:06:57.782 ], 00:06:57.782 "product_name": "NVMe disk", 00:06:57.782 "block_size": 4096, 00:06:57.782 "num_blocks": 38912, 00:06:57.782 "uuid": "286e6ba7-2844-4238-8aa4-c16aef6ce0f3", 00:06:57.782 "numa_id": 1, 00:06:57.782 "assigned_rate_limits": { 00:06:57.782 "rw_ios_per_sec": 0, 00:06:57.782 "rw_mbytes_per_sec": 0, 00:06:57.782 "r_mbytes_per_sec": 0, 00:06:57.782 "w_mbytes_per_sec": 0 00:06:57.782 }, 00:06:57.782 "claimed": false, 00:06:57.782 "zoned": false, 00:06:57.782 "supported_io_types": { 00:06:57.782 "read": true, 
00:06:57.782 "write": true, 00:06:57.782 "unmap": true, 00:06:57.782 "flush": true, 00:06:57.782 "reset": true, 00:06:57.782 "nvme_admin": true, 00:06:57.782 "nvme_io": true, 00:06:57.782 "nvme_io_md": false, 00:06:57.782 "write_zeroes": true, 00:06:57.782 "zcopy": false, 00:06:57.782 "get_zone_info": false, 00:06:57.782 "zone_management": false, 00:06:57.782 "zone_append": false, 00:06:57.782 "compare": true, 00:06:57.782 "compare_and_write": true, 00:06:57.782 "abort": true, 00:06:57.782 "seek_hole": false, 00:06:57.782 "seek_data": false, 00:06:57.782 "copy": true, 00:06:57.782 "nvme_iov_md": false 00:06:57.782 }, 00:06:57.782 "memory_domains": [ 00:06:57.782 { 00:06:57.782 "dma_device_id": "system", 00:06:57.782 "dma_device_type": 1 00:06:57.782 } 00:06:57.782 ], 00:06:57.782 "driver_specific": { 00:06:57.782 "nvme": [ 00:06:57.782 { 00:06:57.782 "trid": { 00:06:57.782 "trtype": "TCP", 00:06:57.782 "adrfam": "IPv4", 00:06:57.782 "traddr": "10.0.0.2", 00:06:57.782 "trsvcid": "4420", 00:06:57.782 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:06:57.782 }, 00:06:57.782 "ctrlr_data": { 00:06:57.782 "cntlid": 1, 00:06:57.782 "vendor_id": "0x8086", 00:06:57.782 "model_number": "SPDK bdev Controller", 00:06:57.782 "serial_number": "SPDK0", 00:06:57.782 "firmware_revision": "25.01", 00:06:57.782 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:57.782 "oacs": { 00:06:57.782 "security": 0, 00:06:57.782 "format": 0, 00:06:57.782 "firmware": 0, 00:06:57.782 "ns_manage": 0 00:06:57.782 }, 00:06:57.782 "multi_ctrlr": true, 00:06:57.782 "ana_reporting": false 00:06:57.782 }, 00:06:57.782 "vs": { 00:06:57.782 "nvme_version": "1.3" 00:06:57.782 }, 00:06:57.782 "ns_data": { 00:06:57.782 "id": 1, 00:06:57.782 "can_share": true 00:06:57.782 } 00:06:57.782 } 00:06:57.782 ], 00:06:57.782 "mp_policy": "active_passive" 00:06:57.782 } 00:06:57.782 } 00:06:57.782 ] 00:06:57.782 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=2193485 00:06:57.782 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:06:57.782 08:51:13 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:06:58.040 Running I/O for 10 seconds... 00:06:58.974 Latency(us) 00:06:58.974 [2024-11-20T07:51:15.015Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:58.974 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:58.974 Nvme0n1 : 1.00 22610.00 88.32 0.00 0.00 0.00 0.00 0.00 00:06:58.974 [2024-11-20T07:51:15.015Z] =================================================================================================================== 00:06:58.974 [2024-11-20T07:51:15.015Z] Total : 22610.00 88.32 0.00 0.00 0.00 0.00 0.00 00:06:58.974 00:06:59.908 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:06:59.908 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:06:59.908 Nvme0n1 : 2.00 22762.00 88.91 0.00 0.00 0.00 0.00 0.00 00:06:59.908 [2024-11-20T07:51:15.949Z] =================================================================================================================== 00:06:59.908 [2024-11-20T07:51:15.949Z] Total : 22762.00 88.91 0.00 0.00 0.00 0.00 0.00 00:06:59.908 00:07:00.166 true 00:07:00.166 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:00.166 08:51:15 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:00.166 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:00.166 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:00.166 08:51:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2193485 00:07:01.100 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:01.100 Nvme0n1 : 3.00 22828.67 89.17 0.00 0.00 0.00 0.00 0.00 00:07:01.100 [2024-11-20T07:51:17.141Z] =================================================================================================================== 00:07:01.100 [2024-11-20T07:51:17.141Z] Total : 22828.67 89.17 0.00 0.00 0.00 0.00 0.00 00:07:01.100 00:07:02.032 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.032 Nvme0n1 : 4.00 22888.00 89.41 0.00 0.00 0.00 0.00 0.00 00:07:02.032 [2024-11-20T07:51:18.073Z] =================================================================================================================== 00:07:02.032 [2024-11-20T07:51:18.073Z] Total : 22888.00 89.41 0.00 0.00 0.00 0.00 0.00 00:07:02.032 00:07:02.965 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:02.965 Nvme0n1 : 5.00 22938.60 89.60 0.00 0.00 0.00 0.00 0.00 00:07:02.965 [2024-11-20T07:51:19.006Z] =================================================================================================================== 00:07:02.965 [2024-11-20T07:51:19.006Z] Total : 22938.60 89.60 0.00 0.00 0.00 0.00 0.00 00:07:02.965 00:07:03.898 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:03.898 Nvme0n1 : 6.00 22968.83 89.72 0.00 0.00 0.00 0.00 0.00 00:07:03.898 [2024-11-20T07:51:19.939Z] =================================================================================================================== 00:07:03.898 
[2024-11-20T07:51:19.939Z] Total : 22968.83 89.72 0.00 0.00 0.00 0.00 0.00 00:07:03.898 00:07:04.832 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:04.832 Nvme0n1 : 7.00 22982.14 89.77 0.00 0.00 0.00 0.00 0.00 00:07:04.832 [2024-11-20T07:51:20.873Z] =================================================================================================================== 00:07:04.832 [2024-11-20T07:51:20.873Z] Total : 22982.14 89.77 0.00 0.00 0.00 0.00 0.00 00:07:04.832 00:07:06.205 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:06.205 Nvme0n1 : 8.00 23007.50 89.87 0.00 0.00 0.00 0.00 0.00 00:07:06.205 [2024-11-20T07:51:22.246Z] =================================================================================================================== 00:07:06.205 [2024-11-20T07:51:22.246Z] Total : 23007.50 89.87 0.00 0.00 0.00 0.00 0.00 00:07:06.205 00:07:07.139 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:07.139 Nvme0n1 : 9.00 23026.56 89.95 0.00 0.00 0.00 0.00 0.00 00:07:07.139 [2024-11-20T07:51:23.180Z] =================================================================================================================== 00:07:07.139 [2024-11-20T07:51:23.180Z] Total : 23026.56 89.95 0.00 0.00 0.00 0.00 0.00 00:07:07.139 00:07:08.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:08.071 Nvme0n1 : 10.00 23039.90 90.00 0.00 0.00 0.00 0.00 0.00 00:07:08.071 [2024-11-20T07:51:24.112Z] =================================================================================================================== 00:07:08.071 [2024-11-20T07:51:24.112Z] Total : 23039.90 90.00 0.00 0.00 0.00 0.00 0.00 00:07:08.071 00:07:08.071 00:07:08.071 Latency(us) 00:07:08.071 [2024-11-20T07:51:24.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:08.071 Nvme0n1 : 10.00 23043.45 90.01 0.00 0.00 5551.93 3276.80 12366.36 00:07:08.071 [2024-11-20T07:51:24.112Z] =================================================================================================================== 00:07:08.071 [2024-11-20T07:51:24.112Z] Total : 23043.45 90.01 0.00 0.00 5551.93 3276.80 12366.36 00:07:08.071 { 00:07:08.071 "results": [ 00:07:08.071 { 00:07:08.071 "job": "Nvme0n1", 00:07:08.071 "core_mask": "0x2", 00:07:08.071 "workload": "randwrite", 00:07:08.071 "status": "finished", 00:07:08.071 "queue_depth": 128, 00:07:08.071 "io_size": 4096, 00:07:08.071 "runtime": 10.004012, 00:07:08.071 "iops": 23043.454965867695, 00:07:08.071 "mibps": 90.01349596042068, 00:07:08.071 "io_failed": 0, 00:07:08.071 "io_timeout": 0, 00:07:08.071 "avg_latency_us": 5551.929882731835, 00:07:08.071 "min_latency_us": 3276.8, 00:07:08.071 "max_latency_us": 12366.358260869565 00:07:08.071 } 00:07:08.071 ], 00:07:08.071 "core_count": 1 00:07:08.071 } 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2193269 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2193269 ']' 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2193269 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2193269 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2193269' 00:07:08.071 killing process with pid 2193269 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2193269 00:07:08.071 Received shutdown signal, test time was about 10.000000 seconds 00:07:08.071 00:07:08.071 Latency(us) 00:07:08.071 [2024-11-20T07:51:24.112Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:08.071 [2024-11-20T07:51:24.112Z] =================================================================================================================== 00:07:08.071 [2024-11-20T07:51:24.112Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:08.071 08:51:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2193269 00:07:08.071 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:08.330 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:08.588 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:08.588 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:08.846 08:51:24 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2189634 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2189634 00:07:08.846 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2189634 Killed "${NVMF_APP[@]}" "$@" 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=2195335 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 2195335 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2195335 ']' 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:08.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:08.846 08:51:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:08.846 [2024-11-20 08:51:24.831901] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:08.846 [2024-11-20 08:51:24.831955] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.105 [2024-11-20 08:51:24.907400] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.105 [2024-11-20 08:51:24.946616] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.105 [2024-11-20 08:51:24.946650] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:09.105 [2024-11-20 08:51:24.946657] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.105 [2024-11-20 08:51:24.946663] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.105 [2024-11-20 08:51:24.946668] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:09.105 [2024-11-20 08:51:24.947195] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.105 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:09.363 [2024-11-20 08:51:25.255773] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:09.363 [2024-11-20 08:51:25.255861] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:09.363 [2024-11-20 08:51:25.255887] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:09.363 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=286e6ba7-2844-4238-8aa4-c16aef6ce0f3 
00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:09.364 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:09.622 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 -t 2000 00:07:09.881 [ 00:07:09.881 { 00:07:09.881 "name": "286e6ba7-2844-4238-8aa4-c16aef6ce0f3", 00:07:09.881 "aliases": [ 00:07:09.881 "lvs/lvol" 00:07:09.881 ], 00:07:09.881 "product_name": "Logical Volume", 00:07:09.881 "block_size": 4096, 00:07:09.881 "num_blocks": 38912, 00:07:09.881 "uuid": "286e6ba7-2844-4238-8aa4-c16aef6ce0f3", 00:07:09.881 "assigned_rate_limits": { 00:07:09.881 "rw_ios_per_sec": 0, 00:07:09.881 "rw_mbytes_per_sec": 0, 00:07:09.881 "r_mbytes_per_sec": 0, 00:07:09.881 "w_mbytes_per_sec": 0 00:07:09.881 }, 00:07:09.881 "claimed": false, 00:07:09.881 "zoned": false, 00:07:09.881 "supported_io_types": { 00:07:09.881 "read": true, 00:07:09.881 "write": true, 00:07:09.881 "unmap": true, 00:07:09.881 "flush": false, 00:07:09.881 "reset": true, 00:07:09.881 "nvme_admin": false, 00:07:09.881 "nvme_io": false, 00:07:09.881 "nvme_io_md": false, 00:07:09.881 "write_zeroes": true, 00:07:09.881 "zcopy": false, 00:07:09.881 "get_zone_info": false, 00:07:09.881 "zone_management": false, 00:07:09.881 "zone_append": 
false, 00:07:09.881 "compare": false, 00:07:09.881 "compare_and_write": false, 00:07:09.881 "abort": false, 00:07:09.881 "seek_hole": true, 00:07:09.881 "seek_data": true, 00:07:09.881 "copy": false, 00:07:09.881 "nvme_iov_md": false 00:07:09.881 }, 00:07:09.881 "driver_specific": { 00:07:09.881 "lvol": { 00:07:09.881 "lvol_store_uuid": "e812937b-bf37-4a6f-be97-b63e8c6d22fb", 00:07:09.881 "base_bdev": "aio_bdev", 00:07:09.881 "thin_provision": false, 00:07:09.881 "num_allocated_clusters": 38, 00:07:09.881 "snapshot": false, 00:07:09.881 "clone": false, 00:07:09.881 "esnap_clone": false 00:07:09.881 } 00:07:09.881 } 00:07:09.881 } 00:07:09.881 ] 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:09.881 08:51:25 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:10.139 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:10.139 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:10.398 [2024-11-20 08:51:26.228617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:10.398 08:51:26 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:10.398 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:10.656 request: 00:07:10.656 { 00:07:10.656 "uuid": "e812937b-bf37-4a6f-be97-b63e8c6d22fb", 00:07:10.656 "method": "bdev_lvol_get_lvstores", 00:07:10.656 "req_id": 1 00:07:10.656 } 00:07:10.656 Got JSON-RPC error response 00:07:10.656 response: 00:07:10.656 { 00:07:10.656 "code": -19, 00:07:10.656 "message": "No such device" 00:07:10.656 } 00:07:10.656 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:07:10.656 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:10.656 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:10.656 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:10.656 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.656 aio_bdev 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:10.657 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:10.915 08:51:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 -t 2000 00:07:11.174 [ 00:07:11.174 { 00:07:11.174 "name": "286e6ba7-2844-4238-8aa4-c16aef6ce0f3", 00:07:11.174 "aliases": [ 00:07:11.174 "lvs/lvol" 00:07:11.174 ], 00:07:11.174 "product_name": "Logical Volume", 00:07:11.174 "block_size": 4096, 00:07:11.174 "num_blocks": 38912, 00:07:11.174 "uuid": "286e6ba7-2844-4238-8aa4-c16aef6ce0f3", 00:07:11.174 "assigned_rate_limits": { 00:07:11.174 "rw_ios_per_sec": 0, 00:07:11.174 "rw_mbytes_per_sec": 0, 00:07:11.174 "r_mbytes_per_sec": 0, 00:07:11.174 "w_mbytes_per_sec": 0 00:07:11.174 }, 00:07:11.174 "claimed": false, 00:07:11.174 "zoned": false, 00:07:11.174 "supported_io_types": { 00:07:11.174 "read": true, 00:07:11.174 "write": true, 00:07:11.174 "unmap": true, 00:07:11.174 "flush": false, 00:07:11.174 "reset": true, 00:07:11.174 "nvme_admin": false, 00:07:11.174 "nvme_io": false, 00:07:11.174 "nvme_io_md": false, 00:07:11.174 "write_zeroes": true, 00:07:11.174 "zcopy": false, 00:07:11.174 "get_zone_info": false, 00:07:11.174 "zone_management": false, 00:07:11.174 "zone_append": false, 00:07:11.174 "compare": false, 00:07:11.174 "compare_and_write": false, 
00:07:11.174 "abort": false, 00:07:11.174 "seek_hole": true, 00:07:11.174 "seek_data": true, 00:07:11.174 "copy": false, 00:07:11.174 "nvme_iov_md": false 00:07:11.174 }, 00:07:11.174 "driver_specific": { 00:07:11.174 "lvol": { 00:07:11.174 "lvol_store_uuid": "e812937b-bf37-4a6f-be97-b63e8c6d22fb", 00:07:11.174 "base_bdev": "aio_bdev", 00:07:11.174 "thin_provision": false, 00:07:11.174 "num_allocated_clusters": 38, 00:07:11.174 "snapshot": false, 00:07:11.174 "clone": false, 00:07:11.174 "esnap_clone": false 00:07:11.174 } 00:07:11.174 } 00:07:11.174 } 00:07:11.174 ] 00:07:11.174 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:07:11.174 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:11.174 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:11.174 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:11.433 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:11.433 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:11.433 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:11.433 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 286e6ba7-2844-4238-8aa4-c16aef6ce0f3 00:07:11.692 08:51:27 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e812937b-bf37-4a6f-be97-b63e8c6d22fb 00:07:11.950 08:51:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:12.209 00:07:12.209 real 0m16.999s 00:07:12.209 user 0m43.587s 00:07:12.209 sys 0m3.938s 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:12.209 ************************************ 00:07:12.209 END TEST lvs_grow_dirty 00:07:12.209 ************************************ 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:12.209 nvmf_trace.0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:12.209 rmmod nvme_tcp 00:07:12.209 rmmod nvme_fabrics 00:07:12.209 rmmod nvme_keyring 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 2195335 ']' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 2195335 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2195335 ']' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2195335 
00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.209 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2195335 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2195335' 00:07:12.468 killing process with pid 2195335 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2195335 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2195335 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:12.468 08:51:28 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:15.004 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:07:15.004 00:07:15.004 real 0m41.895s 00:07:15.004 user 1m4.554s 00:07:15.004 sys 0m10.293s 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:15.004 ************************************ 00:07:15.004 END TEST nvmf_lvs_grow 00:07:15.004 ************************************ 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:15.004 ************************************ 00:07:15.004 START TEST nvmf_bdev_io_wait 00:07:15.004 ************************************ 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:07:15.004 * Looking for test storage... 00:07:15.004 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:07:15.004 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:15.004 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:15.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.005 --rc genhtml_branch_coverage=1 00:07:15.005 --rc genhtml_function_coverage=1 00:07:15.005 --rc genhtml_legend=1 00:07:15.005 --rc geninfo_all_blocks=1 00:07:15.005 --rc geninfo_unexecuted_blocks=1 00:07:15.005 00:07:15.005 ' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:15.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.005 --rc genhtml_branch_coverage=1 00:07:15.005 --rc genhtml_function_coverage=1 00:07:15.005 --rc genhtml_legend=1 00:07:15.005 --rc geninfo_all_blocks=1 00:07:15.005 --rc geninfo_unexecuted_blocks=1 00:07:15.005 00:07:15.005 ' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:15.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.005 --rc genhtml_branch_coverage=1 00:07:15.005 --rc genhtml_function_coverage=1 00:07:15.005 --rc genhtml_legend=1 00:07:15.005 --rc geninfo_all_blocks=1 00:07:15.005 --rc geninfo_unexecuted_blocks=1 00:07:15.005 00:07:15.005 ' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:15.005 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:15.005 --rc genhtml_branch_coverage=1 00:07:15.005 --rc genhtml_function_coverage=1 00:07:15.005 --rc genhtml_legend=1 00:07:15.005 --rc geninfo_all_blocks=1 00:07:15.005 --rc geninfo_unexecuted_blocks=1 00:07:15.005 00:07:15.005 ' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:07:15.005 08:51:30 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:15.005 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
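[Editor's note] The `[: : integer expression expected` message captured above comes from `common.sh` line 31 evaluating `'[' '' -eq 1 ']'`: the `-eq` operator requires integer operands, and the flag variable expanded to an empty string. A sketch of the usual guard, with `FLAG` as a hypothetical stand-in for whichever variable expanded empty (defaulting the expansion silences the error without changing the branch taken):

```shell
# Empty flag, as in the trace: [ "$FLAG" -eq 1 ] would print
# "[: : integer expression expected" and return a failure status.
FLAG=""

flag_is_set() {
  # ${FLAG:-0} substitutes 0 when FLAG is unset or empty,
  # so -eq always sees an integer on both sides.
  [ "${FLAG:-0}" -eq 1 ]
}

if flag_is_set; then echo "set"; else echo "unset"; fi   # prints "unset"
```

In this log the test still proceeds correctly because the failed comparison and a false result take the same branch; the guard only removes the noise on stderr.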
00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:15.005 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:15.006 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:15.006 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:15.006 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:07:15.006 08:51:30 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:21.577 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:21.577 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:21.577 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:21.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:21.578 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:21.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:21.578 Found net devices under 0000:86:00.0: cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:21.578 Found net devices under 0000:86:00.1: cvl_0_1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:21.578 
08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:07:21.578 10.0.0.1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:21.578 10.0.0.2 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:21.578 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:21.578 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:21.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:21.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:07:21.579 00:07:21.579 --- 10.0.0.1 ping statistics --- 00:07:21.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.579 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:21.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:21.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:07:21.579 00:07:21.579 --- 10.0.0.2 ping statistics --- 00:07:21.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:21.579 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
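The trace above shows `nvmf/setup.sh` repeatedly deciding whether to run a command directly or inside the `nvmf_ns_spdk` network namespace: when a second argument names an array (here `NVMF_TARGET_NS_CMD`), `local -n` binds it as a nameref and the final command line is assembled with `eval`. A minimal sketch of that pattern follows; `run_in_ns` and the demo `NS` array are illustrative names, not the script's actual helpers.

```shell
#!/usr/bin/env bash
# Sketch of the optional-netns dispatch seen in the xtrace above:
# pass a command string, plus (optionally) the NAME of an array that
# holds a command prefix such as (ip netns exec nvmf_ns_spdk).
run_in_ns() {
    local cmd=$1 in_ns=${2:-}
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns       # nameref: resolves the prefix array by name
        eval "${ns[*]} $cmd"     # e.g. 'ip netns exec nvmf_ns_spdk ping ...'
    else
        eval " $cmd"             # matches the bare "eval ' cmd'" trace lines
    fi
}
```

With `NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)`, a call like `run_in_ns "ping -c 1 10.0.0.1" NVMF_TARGET_NS_CMD` reproduces the namespaced ping in the log, while omitting the second argument reproduces the host-side `cat /sys/class/net/cvl_0_0/ifalias` form.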
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:21.579 08:51:36 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=2199609 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 2199609 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2199609 ']' 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:21.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:21.579 08:51:36 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.579 [2024-11-20 08:51:36.931374] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:21.579 [2024-11-20 08:51:36.931420] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:21.579 [2024-11-20 08:51:37.010162] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:21.579 [2024-11-20 08:51:37.051329] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:21.579 [2024-11-20 08:51:37.051366] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:21.579 [2024-11-20 08:51:37.051373] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:21.579 [2024-11-20 08:51:37.051379] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:21.579 [2024-11-20 08:51:37.051384] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:21.579 [2024-11-20 08:51:37.052788] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:21.579 [2024-11-20 08:51:37.052898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:21.579 [2024-11-20 08:51:37.053003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:21.579 [2024-11-20 08:51:37.053003] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:21.839 [2024-11-20 08:51:37.870802] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:21.839 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 Malloc0 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:22.097 08:51:37 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:22.097 [2024-11-20 08:51:37.922143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2199674 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2199676 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:22.097 { 00:07:22.097 "params": { 00:07:22.097 "name": "Nvme$subsystem", 00:07:22.097 "trtype": "$TEST_TRANSPORT", 00:07:22.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.097 "adrfam": "ipv4", 00:07:22.097 "trsvcid": "$NVMF_PORT", 00:07:22.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.097 "hdgst": ${hdgst:-false}, 00:07:22.097 "ddgst": ${ddgst:-false} 00:07:22.097 }, 00:07:22.097 "method": "bdev_nvme_attach_controller" 00:07:22.097 } 00:07:22.097 EOF 00:07:22.097 )") 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2199679 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:22.097 { 00:07:22.097 "params": { 00:07:22.097 "name": "Nvme$subsystem", 00:07:22.097 "trtype": "$TEST_TRANSPORT", 00:07:22.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.097 "adrfam": "ipv4", 00:07:22.097 "trsvcid": "$NVMF_PORT", 00:07:22.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.097 "hdgst": ${hdgst:-false}, 00:07:22.097 "ddgst": ${ddgst:-false} 00:07:22.097 }, 
00:07:22.097 "method": "bdev_nvme_attach_controller" 00:07:22.097 } 00:07:22.097 EOF 00:07:22.097 )") 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2199682 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:22.097 { 00:07:22.097 "params": { 00:07:22.097 "name": "Nvme$subsystem", 00:07:22.097 "trtype": "$TEST_TRANSPORT", 00:07:22.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.097 "adrfam": "ipv4", 00:07:22.097 "trsvcid": "$NVMF_PORT", 00:07:22.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.097 "hdgst": ${hdgst:-false}, 00:07:22.097 "ddgst": ${ddgst:-false} 00:07:22.097 }, 00:07:22.097 "method": "bdev_nvme_attach_controller" 
00:07:22.097 } 00:07:22.097 EOF 00:07:22.097 )") 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:22.097 { 00:07:22.097 "params": { 00:07:22.097 "name": "Nvme$subsystem", 00:07:22.097 "trtype": "$TEST_TRANSPORT", 00:07:22.097 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:22.097 "adrfam": "ipv4", 00:07:22.097 "trsvcid": "$NVMF_PORT", 00:07:22.097 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:22.097 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:22.097 "hdgst": ${hdgst:-false}, 00:07:22.097 "ddgst": ${ddgst:-false} 00:07:22.097 }, 00:07:22.097 "method": "bdev_nvme_attach_controller" 00:07:22.097 } 00:07:22.097 EOF 00:07:22.097 )") 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2199674 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:07:22.097 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:22.098 "params": { 00:07:22.098 "name": "Nvme1", 00:07:22.098 "trtype": "tcp", 00:07:22.098 "traddr": "10.0.0.2", 00:07:22.098 "adrfam": "ipv4", 00:07:22.098 "trsvcid": "4420", 00:07:22.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:22.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:22.098 "hdgst": false, 00:07:22.098 "ddgst": false 00:07:22.098 }, 00:07:22.098 "method": "bdev_nvme_attach_controller" 00:07:22.098 }' 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:22.098 "params": { 00:07:22.098 "name": "Nvme1", 00:07:22.098 "trtype": "tcp", 00:07:22.098 "traddr": "10.0.0.2", 00:07:22.098 "adrfam": "ipv4", 00:07:22.098 "trsvcid": "4420", 00:07:22.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:22.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:22.098 "hdgst": false, 00:07:22.098 "ddgst": false 00:07:22.098 }, 00:07:22.098 "method": "bdev_nvme_attach_controller" 00:07:22.098 }' 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:22.098 "params": { 00:07:22.098 "name": "Nvme1", 00:07:22.098 "trtype": "tcp", 00:07:22.098 "traddr": "10.0.0.2", 00:07:22.098 "adrfam": "ipv4", 00:07:22.098 "trsvcid": "4420", 00:07:22.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:22.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:22.098 "hdgst": false, 00:07:22.098 "ddgst": false 00:07:22.098 }, 00:07:22.098 "method": 
"bdev_nvme_attach_controller" 00:07:22.098 }' 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:07:22.098 08:51:37 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:22.098 "params": { 00:07:22.098 "name": "Nvme1", 00:07:22.098 "trtype": "tcp", 00:07:22.098 "traddr": "10.0.0.2", 00:07:22.098 "adrfam": "ipv4", 00:07:22.098 "trsvcid": "4420", 00:07:22.098 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:22.098 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:22.098 "hdgst": false, 00:07:22.098 "ddgst": false 00:07:22.098 }, 00:07:22.098 "method": "bdev_nvme_attach_controller" 00:07:22.098 }' 00:07:22.098 [2024-11-20 08:51:37.971629] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:22.098 [2024-11-20 08:51:37.971679] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:22.098 [2024-11-20 08:51:37.972919] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:22.098 [2024-11-20 08:51:37.972966] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:22.098 [2024-11-20 08:51:37.977510] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:07:22.098 [2024-11-20 08:51:37.977553] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:22.098 [2024-11-20 08:51:37.978144] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:22.098 [2024-11-20 08:51:37.978185] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:22.356 [2024-11-20 08:51:38.159315] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.356 [2024-11-20 08:51:38.202348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:22.356 [2024-11-20 08:51:38.255165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.356 [2024-11-20 08:51:38.298369] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:07:22.356 [2024-11-20 08:51:38.365558] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.613 [2024-11-20 08:51:38.413472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:22.613 [2024-11-20 08:51:38.422321] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:07:22.613 [2024-11-20 08:51:38.456684] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:07:22.613 Running I/O for 1 seconds... 00:07:22.613 Running I/O for 1 seconds... 00:07:22.613 Running I/O for 1 seconds... 00:07:22.613 Running I/O for 1 seconds... 
00:07:23.544 13672.00 IOPS, 53.41 MiB/s 00:07:23.544 Latency(us) 00:07:23.544 [2024-11-20T07:51:39.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.544 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:23.544 Nvme1n1 : 1.01 13731.71 53.64 0.00 0.00 9293.51 4274.09 15614.66 00:07:23.544 [2024-11-20T07:51:39.585Z] =================================================================================================================== 00:07:23.544 [2024-11-20T07:51:39.585Z] Total : 13731.71 53.64 0.00 0.00 9293.51 4274.09 15614.66 00:07:23.544 6065.00 IOPS, 23.69 MiB/s 00:07:23.544 Latency(us) 00:07:23.544 [2024-11-20T07:51:39.585Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.544 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:23.544 Nvme1n1 : 1.02 6110.88 23.87 0.00 0.00 20797.89 7579.38 34648.60 00:07:23.544 [2024-11-20T07:51:39.585Z] =================================================================================================================== 00:07:23.544 [2024-11-20T07:51:39.585Z] Total : 6110.88 23.87 0.00 0.00 20797.89 7579.38 34648.60 00:07:23.802 246968.00 IOPS, 964.72 MiB/s 00:07:23.802 Latency(us) 00:07:23.802 [2024-11-20T07:51:39.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.802 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:23.802 Nvme1n1 : 1.00 246577.04 963.19 0.00 0.00 516.89 235.07 1552.92 00:07:23.802 [2024-11-20T07:51:39.843Z] =================================================================================================================== 00:07:23.802 [2024-11-20T07:51:39.843Z] Total : 246577.04 963.19 0.00 0.00 516.89 235.07 1552.92 00:07:23.802 6159.00 IOPS, 24.06 MiB/s 00:07:23.802 Latency(us) 00:07:23.802 [2024-11-20T07:51:39.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:23.802 Job: Nvme1n1 (Core Mask 
0x20, workload: read, depth: 128, IO size: 4096) 00:07:23.802 Nvme1n1 : 1.01 6253.99 24.43 0.00 0.00 20404.29 4815.47 44450.50 00:07:23.802 [2024-11-20T07:51:39.843Z] =================================================================================================================== 00:07:23.802 [2024-11-20T07:51:39.843Z] Total : 6253.99 24.43 0.00 0.00 20404.29 4815.47 44450.50 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2199676 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2199679 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2199682 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:23.802 
08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:23.802 rmmod nvme_tcp 00:07:23.802 rmmod nvme_fabrics 00:07:23.802 rmmod nvme_keyring 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 2199609 ']' 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 2199609 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2199609 ']' 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2199609 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:23.802 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2199609 00:07:24.061 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:24.061 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:24.061 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2199609' 00:07:24.061 killing process with pid 2199609 00:07:24.061 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2199609 00:07:24.061 08:51:39 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@978 -- # wait 2199609 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:24.061 08:51:40 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' 
ip addr flush dev cvl_0_0' 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:26.599 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:07:26.600 00:07:26.600 real 0m11.542s 00:07:26.600 user 0m18.755s 00:07:26.600 sys 0m6.280s 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:26.600 ************************************ 00:07:26.600 END TEST nvmf_bdev_io_wait 00:07:26.600 ************************************ 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:26.600 ************************************ 00:07:26.600 START TEST nvmf_queue_depth 00:07:26.600 ************************************ 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:26.600 * Looking for test storage... 
00:07:26.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:07:26.600 
08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:26.600 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:07:26.600 --rc genhtml_branch_coverage=1 00:07:26.600 --rc genhtml_function_coverage=1 00:07:26.600 --rc genhtml_legend=1 00:07:26.600 --rc geninfo_all_blocks=1 00:07:26.600 --rc geninfo_unexecuted_blocks=1 00:07:26.600 00:07:26.600 ' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:26.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.600 --rc genhtml_branch_coverage=1 00:07:26.600 --rc genhtml_function_coverage=1 00:07:26.600 --rc genhtml_legend=1 00:07:26.600 --rc geninfo_all_blocks=1 00:07:26.600 --rc geninfo_unexecuted_blocks=1 00:07:26.600 00:07:26.600 ' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:26.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.600 --rc genhtml_branch_coverage=1 00:07:26.600 --rc genhtml_function_coverage=1 00:07:26.600 --rc genhtml_legend=1 00:07:26.600 --rc geninfo_all_blocks=1 00:07:26.600 --rc geninfo_unexecuted_blocks=1 00:07:26.600 00:07:26.600 ' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:26.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:26.600 --rc genhtml_branch_coverage=1 00:07:26.600 --rc genhtml_function_coverage=1 00:07:26.600 --rc genhtml_legend=1 00:07:26.600 --rc geninfo_all_blocks=1 00:07:26.600 --rc geninfo_unexecuted_blocks=1 00:07:26.600 00:07:26.600 ' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:26.600 08:51:42 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:26.600 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:26.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:07:26.601 08:51:42 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:07:33.176 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:33.176 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:33.176 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.176 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:33.176 Found net devices under 0000:86:00.0: cvl_0_0 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:33.176 Found net devices under 0000:86:00.1: cvl_0_1 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:07:33.176 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
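The `setup_interface_pair` trace that follows assigns each initiator/target pair two consecutive values from the integer IP pool (`0x0a000001` = 167772161) and converts them to dotted-quad form via `printf`. A minimal standalone sketch of that conversion (a re-implementation for illustration, not the actual `nvmf/setup.sh` source):

```shell
# Split a 32-bit integer into four octets with shifts and masks,
# mirroring the val_to_ip helper traced in the log below.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xFF )) \
        $(( (val >> 16) & 0xFF )) \
        $(( (val >> 8)  & 0xFF )) \
        $((  val        & 0xFF ))
}

val_to_ip 167772161   # initiator side (cvl_0_0) -> 10.0.0.1
val_to_ip 167772162   # target side (cvl_0_1)    -> 10.0.0.2
```

Each pair consumes two consecutive pool values (`ips=("$ip" $((++ip)))` in the trace), which is why the initiator gets 10.0.0.1 and the namespaced target gets 10.0.0.2.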
00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:07:33.177 10.0.0.1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:33.177 10.0.0.2 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:33.177 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:33.177 
08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:33.177 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:33.177 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:07:33.177 00:07:33.177 --- 10.0.0.1 ping statistics --- 00:07:33.177 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.177 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:33.177 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:33.178 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:33.178 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:07:33.178 00:07:33.178 --- 10.0.0.2 ping statistics --- 00:07:33.178 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:33.178 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # 
NVMF_TARGET_INTERFACE=cvl_0_1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:33.178 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=2203703 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 2203703 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2203703 ']' 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:33.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.178 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.178 [2024-11-20 08:51:48.566078] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:07:33.178 [2024-11-20 08:51:48.566125] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:33.178 [2024-11-20 08:51:48.646182] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.178 [2024-11-20 08:51:48.687587] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:33.179 [2024-11-20 08:51:48.687624] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:33.179 [2024-11-20 08:51:48.687631] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:33.179 [2024-11-20 08:51:48.687637] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:33.179 [2024-11-20 08:51:48.687642] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
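The `rpc_cmd` calls traced below (queue_depth.sh@23 through @27) build the target configuration step by step. Collected into one place, and assuming SPDK's stock `scripts/rpc.py` client against the default `/var/tmp/spdk.sock` (the exact client path is an assumption; the subcommands, NQN, and addresses are taken verbatim from the log), the equivalent standalone sequence is:

```shell
# Hedged sketch of the target bring-up exercised by this test; requires a
# running nvmf_tgt, so this is illustrative rather than directly runnable here.
RPC=scripts/rpc.py   # assumed location of the SPDK RPC client

$RPC nvmf_create_transport -t tcp -o -u 8192                                  # TCP transport, 8k IO unit
$RPC bdev_malloc_create 64 512 -b Malloc0                                     # 64 MiB, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0                 # expose bdev as NS 1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

10.0.0.2:4420 is the address assigned to `cvl_0_1` inside the `nvmf_ns_spdk` namespace earlier in the log, which is why the listener notice that follows reports that endpoint.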
00:07:33.179 [2024-11-20 08:51:48.688183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 [2024-11-20 08:51:48.827374] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 Malloc0 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.179 08:51:48 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 [2024-11-20 08:51:48.877660] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2203728 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2203728 /var/tmp/bdevperf.sock 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2203728 ']' 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:33.179 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:33.179 08:51:48 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.179 [2024-11-20 08:51:48.926446] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:07:33.179 [2024-11-20 08:51:48.926486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2203728 ] 00:07:33.179 [2024-11-20 08:51:49.000270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:33.179 [2024-11-20 08:51:49.041328] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:33.179 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:33.179 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:07:33.179 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:07:33.179 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:33.179 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:33.437 NVMe0n1 00:07:33.437 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:33.437 08:51:49 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:33.437 Running I/O for 10 seconds... 
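Stripped of the xtrace prefixes, the commands this queue_depth test just issued reduce to a short RPC sequence. The sketch below is a minimal reconstruction from the trace above, assuming an already-running SPDK `nvmf_tgt`, with `rpc.py` (from `scripts/` in an SPDK checkout), `bdevperf`, and `bdevperf.py` on PATH; the 10.0.0.2 address and socket paths are specific to this CI environment:

```shell
# Target-side setup, as seen in queue_depth.sh@23-27 above:
rpc.py nvmf_create_transport -t tcp -o -u 8192        # TCP transport, 8192-byte in-capsule data
rpc.py bdev_malloc_create 64 512 -b Malloc0           # 64 MiB RAM-backed bdev, 512-byte blocks
rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Initiator side, as seen in queue_depth.sh@29-35: start bdevperf waiting (-z) on its
# own RPC socket, attach the remote namespace over TCP, then kick off the run.
bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
    -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
```

This is environment-dependent and not runnable outside an SPDK test host; it is only meant to make the traced flow readable at a glance.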
00:07:35.748 11708.00 IOPS, 45.73 MiB/s [2024-11-20T07:51:52.724Z] 11951.50 IOPS, 46.69 MiB/s [2024-11-20T07:51:53.660Z] 12056.33 IOPS, 47.10 MiB/s [2024-11-20T07:51:54.594Z] 12098.75 IOPS, 47.26 MiB/s [2024-11-20T07:51:55.528Z] 12153.80 IOPS, 47.48 MiB/s [2024-11-20T07:51:56.464Z] 12152.50 IOPS, 47.47 MiB/s [2024-11-20T07:51:57.398Z] 12202.57 IOPS, 47.67 MiB/s [2024-11-20T07:51:58.773Z] 12178.62 IOPS, 47.57 MiB/s [2024-11-20T07:51:59.707Z] 12190.33 IOPS, 47.62 MiB/s [2024-11-20T07:51:59.707Z] 12222.70 IOPS, 47.74 MiB/s 00:07:43.666 Latency(us) 00:07:43.666 [2024-11-20T07:51:59.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.666 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:07:43.666 Verification LBA range: start 0x0 length 0x4000 00:07:43.666 NVMe0n1 : 10.06 12247.00 47.84 0.00 0.00 83309.71 18350.08 53568.56 00:07:43.666 [2024-11-20T07:51:59.707Z] =================================================================================================================== 00:07:43.666 [2024-11-20T07:51:59.707Z] Total : 12247.00 47.84 0.00 0.00 83309.71 18350.08 53568.56 00:07:43.666 { 00:07:43.666 "results": [ 00:07:43.666 { 00:07:43.666 "job": "NVMe0n1", 00:07:43.666 "core_mask": "0x1", 00:07:43.666 "workload": "verify", 00:07:43.666 "status": "finished", 00:07:43.666 "verify_range": { 00:07:43.666 "start": 0, 00:07:43.666 "length": 16384 00:07:43.666 }, 00:07:43.666 "queue_depth": 1024, 00:07:43.666 "io_size": 4096, 00:07:43.666 "runtime": 10.056589, 00:07:43.666 "iops": 12246.995477293543, 00:07:43.666 "mibps": 47.839826083177904, 00:07:43.666 "io_failed": 0, 00:07:43.666 "io_timeout": 0, 00:07:43.666 "avg_latency_us": 83309.70632165433, 00:07:43.666 "min_latency_us": 18350.08, 00:07:43.666 "max_latency_us": 53568.556521739134 00:07:43.666 } 00:07:43.666 ], 00:07:43.666 "core_count": 1 00:07:43.666 } 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 2203728 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2203728 ']' 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2203728 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2203728 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203728' 00:07:43.666 killing process with pid 2203728 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2203728 00:07:43.666 Received shutdown signal, test time was about 10.000000 seconds 00:07:43.666 00:07:43.666 Latency(us) 00:07:43.666 [2024-11-20T07:51:59.707Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:43.666 [2024-11-20T07:51:59.707Z] =================================================================================================================== 00:07:43.666 [2024-11-20T07:51:59.707Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2203728 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:43.666 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:43.666 rmmod nvme_tcp 00:07:43.666 rmmod nvme_fabrics 00:07:43.666 rmmod nvme_keyring 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 2203703 ']' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2203703 ']' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2203703' 00:07:43.926 killing process with pid 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2203703 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:43.926 08:51:59 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:46.461 08:52:02 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:07:46.461 
08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:07:46.461 00:07:46.461 real 0m19.863s 00:07:46.461 user 0m23.134s 00:07:46.461 sys 0m6.136s 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:07:46.461 ************************************ 00:07:46.461 END TEST nvmf_queue_depth 00:07:46.461 ************************************ 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:46.461 ************************************ 00:07:46.461 START TEST nvmf_nmic 00:07:46.461 ************************************ 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:07:46.461 * Looking for test storage... 
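The MiB/s column in the queue_depth run summary above is derived directly from the reported IOPS and the fixed 4096-byte I/O size (`bdevperf -o 4096`): throughput in MiB/s is IOPS × io_size / 2^20. A quick check against the `iops` and `mibps` fields of the results JSON:

```shell
# Reproduce "mibps" from "iops" in the queue_depth results JSON above:
# 12246.995477293543 IOPS * 4096 bytes per I/O, converted to MiB/s.
iops=12246.995477293543
io_size=4096
awk -v iops="$iops" -v sz="$io_size" \
    'BEGIN { printf "%.6f\n", iops * sz / (1024 * 1024) }'   # prints 47.839826
```

which matches the reported 47.839826 MiB/s.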
00:07:46.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:46.461 08:52:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:46.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.461 --rc genhtml_branch_coverage=1 00:07:46.461 --rc genhtml_function_coverage=1 00:07:46.461 --rc genhtml_legend=1 00:07:46.461 --rc geninfo_all_blocks=1 00:07:46.461 --rc geninfo_unexecuted_blocks=1 
00:07:46.461 00:07:46.461 ' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:46.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.461 --rc genhtml_branch_coverage=1 00:07:46.461 --rc genhtml_function_coverage=1 00:07:46.461 --rc genhtml_legend=1 00:07:46.461 --rc geninfo_all_blocks=1 00:07:46.461 --rc geninfo_unexecuted_blocks=1 00:07:46.461 00:07:46.461 ' 00:07:46.461 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:46.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.461 --rc genhtml_branch_coverage=1 00:07:46.461 --rc genhtml_function_coverage=1 00:07:46.461 --rc genhtml_legend=1 00:07:46.461 --rc geninfo_all_blocks=1 00:07:46.461 --rc geninfo_unexecuted_blocks=1 00:07:46.461 00:07:46.461 ' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:46.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:46.462 --rc genhtml_branch_coverage=1 00:07:46.462 --rc genhtml_function_coverage=1 00:07:46.462 --rc genhtml_legend=1 00:07:46.462 --rc geninfo_all_blocks=1 00:07:46.462 --rc geninfo_unexecuted_blocks=1 00:07:46.462 00:07:46.462 ' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:46.462 08:52:02 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:46.462 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:07:46.462 08:52:02 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:53.196 08:52:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:53.196 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:07:53.197 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:53.197 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
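The `common.sh@141-172` lines above bucket each PCI vendor:device ID into a NIC-family array (e810, x722, mlx) before net devices are resolved. A minimal sketch of that bucketing, using a hypothetical hardcoded device list in place of the real `pci_bus_cache` lookup (the two addresses are the ones this log reports for the host):

```shell
#!/usr/bin/env bash
# Sketch of the vendor:device bucketing done in nvmf/common.sh.
# The "devices" array is a hypothetical stand-in for pci_bus_cache.
intel=0x8086 mellanox=0x15b3
e810=() x722=() mlx=()

# address|vendor:device pairs, as reported for this host in the log
devices=(
  "0000:86:00.0|$intel:0x159b"
  "0000:86:00.1|$intel:0x159b"
)

for entry in "${devices[@]}"; do
  addr=${entry%%|*} id=${entry##*|}
  case $id in
    "$intel:0x1592"|"$intel:0x159b") e810+=("$addr") ;;  # Intel E810 family
    "$intel:0x37d2")                 x722+=("$addr") ;;  # Intel X722
    "$mellanox:"*)                   mlx+=("$addr")  ;;  # Mellanox ConnectX
  esac
done

printf 'e810: %s\n' "${e810[@]}"
```

Both 0x159b ports land in the e810 bucket, which is why the log then takes the `[[ e810 == e810 ]]` branch and narrows `pci_devs` to those two functions.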
00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:53.197 Found net devices under 0000:86:00.0: cvl_0_0 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:53.197 Found net devices under 0000:86:00.1: cvl_0_1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:53.197 08:52:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # 
echo 10.0.0.1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:53.197 10.0.0.1 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:53.197 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:53.198 10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:53.198 
08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:53.198 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
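The `set_ip` calls above hand `val_to_ip` a 32-bit pool value (167772161, then 167772162) and get back `10.0.0.1` / `10.0.0.2`. A sketch of that conversion, assuming the helper unpacks one octet per byte exactly as the `printf '%u.%u.%u.%u'` lines at `nvmf/setup.sh@13` suggest:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip as seen at nvmf/setup.sh@11-13: unpack a
# 32-bit integer into dotted-quad form, highest byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

Keeping the pool as an integer lets `setup_interfaces` hand out consecutive initiator/target addresses with plain arithmetic (`ip_pool += 2` per pair), as the `setup.sh@31-33` lines show.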
00:07:53.198 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.357 ms 00:07:53.198 00:07:53.198 --- 10.0.0.1 ping statistics --- 00:07:53.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.198 rtt min/avg/max/mdev = 0.357/0.357/0.357/0.000 ms 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:53.198 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:53.198 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:07:53.198 00:07:53.198 --- 10.0.0.2 ping statistics --- 00:07:53.198 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:53.198 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:53.198 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:53.199 08:52:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 
-- # local dev=target0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:53.199 08:52:08 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=2209143 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 2209143 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2209143 ']' 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:53.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:53.199 08:52:08 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.199 [2024-11-20 08:52:08.515990] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:07:53.199 [2024-11-20 08:52:08.516038] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:53.199 [2024-11-20 08:52:08.596783] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:53.199 [2024-11-20 08:52:08.638669] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:53.199 [2024-11-20 08:52:08.638710] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:53.199 [2024-11-20 08:52:08.638717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:53.199 [2024-11-20 08:52:08.638722] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:53.199 [2024-11-20 08:52:08.638727] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:53.199 [2024-11-20 08:52:08.640188] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:53.199 [2024-11-20 08:52:08.640295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:53.199 [2024-11-20 08:52:08.640405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.199 [2024-11-20 08:52:08.640406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:53.456 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:53.456 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:07:53.456 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:53.456 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 [2024-11-20 08:52:09.399289] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.457 
08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 Malloc0 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 [2024-11-20 08:52:09.467940] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:07:53.457 test case1: single bdev can't be used in multiple subsystems 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.457 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.457 [2024-11-20 08:52:09.491835] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:07:53.457 [2024-11-20 
08:52:09.491854] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:07:53.457 [2024-11-20 08:52:09.491862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:07:53.457 request: 00:07:53.457 { 00:07:53.714 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:07:53.714 "namespace": { 00:07:53.714 "bdev_name": "Malloc0", 00:07:53.714 "no_auto_visible": false 00:07:53.714 }, 00:07:53.714 "method": "nvmf_subsystem_add_ns", 00:07:53.714 "req_id": 1 00:07:53.714 } 00:07:53.714 Got JSON-RPC error response 00:07:53.714 response: 00:07:53.714 { 00:07:53.714 "code": -32602, 00:07:53.714 "message": "Invalid parameters" 00:07:53.714 } 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:07:53.714 Adding namespace failed - expected result. 
00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:07:53.714 test case2: host connect to nvmf target in multiple paths 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:07:53.714 [2024-11-20 08:52:09.503992] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:53.714 08:52:09 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:07:54.644 08:52:10 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:07:56.015 08:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:07:56.015 08:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:07:56.015 08:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:07:56.015 08:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:07:56.015 08:52:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:07:57.910 08:52:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:07:57.910 [global] 00:07:57.910 thread=1 00:07:57.910 invalidate=1 00:07:57.910 rw=write 00:07:57.910 time_based=1 00:07:57.910 runtime=1 00:07:57.910 ioengine=libaio 00:07:57.910 direct=1 00:07:57.910 bs=4096 00:07:57.910 iodepth=1 00:07:57.910 norandommap=0 00:07:57.910 numjobs=1 00:07:57.910 00:07:57.910 verify_dump=1 00:07:57.910 verify_backlog=512 00:07:57.910 verify_state_save=0 00:07:57.910 do_verify=1 00:07:57.910 verify=crc32c-intel 00:07:57.910 [job0] 00:07:57.910 filename=/dev/nvme0n1 00:07:57.910 Could not set queue depth (nvme0n1) 00:07:58.168 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:07:58.168 fio-3.35 00:07:58.168 Starting 1 thread 00:07:59.538 00:07:59.538 job0: (groupid=0, jobs=1): err= 0: pid=2210222: Wed Nov 20 08:52:15 2024 00:07:59.538 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(9.95MiB/1001msec) 00:07:59.539 slat (nsec): min=6189, max=30801, avg=7136.52, stdev=1035.31 00:07:59.539 clat (usec): min=163, max=531, avg=229.48, stdev=21.89 00:07:59.539 lat (usec): min=170, max=562, avg=236.62, 
stdev=22.03 00:07:59.539 clat percentiles (usec): 00:07:59.539 | 1.00th=[ 182], 5.00th=[ 188], 10.00th=[ 198], 20.00th=[ 221], 00:07:59.539 | 30.00th=[ 225], 40.00th=[ 229], 50.00th=[ 231], 60.00th=[ 233], 00:07:59.539 | 70.00th=[ 237], 80.00th=[ 241], 90.00th=[ 249], 95.00th=[ 273], 00:07:59.539 | 99.00th=[ 285], 99.50th=[ 289], 99.90th=[ 310], 99.95th=[ 474], 00:07:59.539 | 99.99th=[ 529] 00:07:59.539 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:07:59.539 slat (nsec): min=9169, max=39534, avg=10097.83, stdev=1240.54 00:07:59.539 clat (usec): min=106, max=376, avg=141.12, stdev=22.02 00:07:59.539 lat (usec): min=116, max=416, avg=151.22, stdev=22.14 00:07:59.539 clat percentiles (usec): 00:07:59.539 | 1.00th=[ 119], 5.00th=[ 123], 10.00th=[ 125], 20.00th=[ 128], 00:07:59.539 | 30.00th=[ 130], 40.00th=[ 131], 50.00th=[ 133], 60.00th=[ 137], 00:07:59.539 | 70.00th=[ 139], 80.00th=[ 149], 90.00th=[ 180], 95.00th=[ 186], 00:07:59.539 | 99.00th=[ 198], 99.50th=[ 231], 99.90th=[ 273], 99.95th=[ 281], 00:07:59.539 | 99.99th=[ 375] 00:07:59.539 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:07:59.539 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:07:59.539 lat (usec) : 250=95.07%, 500=4.91%, 750=0.02% 00:07:59.539 cpu : usr=3.00%, sys=3.90%, ctx=5107, majf=0, minf=1 00:07:59.539 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:07:59.539 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:59.539 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:07:59.539 issued rwts: total=2547,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:07:59.539 latency : target=0, window=0, percentile=100.00%, depth=1 00:07:59.539 00:07:59.539 Run status group 0 (all jobs): 00:07:59.539 READ: bw=9.94MiB/s (10.4MB/s), 9.94MiB/s-9.94MiB/s (10.4MB/s-10.4MB/s), io=9.95MiB (10.4MB), run=1001-1001msec 00:07:59.539 WRITE: bw=9.99MiB/s 
(10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:07:59.539 00:07:59.539 Disk stats (read/write): 00:07:59.539 nvme0n1: ios=2163/2560, merge=0/0, ticks=493/350, in_queue=843, util=91.38% 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:07:59.539 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:59.539 08:52:15 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:59.539 rmmod nvme_tcp 00:07:59.539 rmmod nvme_fabrics 00:07:59.539 rmmod nvme_keyring 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 2209143 ']' 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 2209143 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2209143 ']' 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2209143 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2209143 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2209143' 00:07:59.539 killing process with pid 2209143 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2209143 00:07:59.539 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2209143 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == 
iso ']' 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:59.798 08:52:15 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:02.335 08:52:17 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:08:02.335 00:08:02.335 real 0m15.692s 00:08:02.335 user 0m35.532s 00:08:02.335 sys 0m5.536s 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 ************************************ 00:08:02.335 END TEST nvmf_nmic 00:08:02.335 ************************************ 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:02.335 ************************************ 00:08:02.335 START TEST nvmf_fio_target 00:08:02.335 ************************************ 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:02.335 * Looking for test storage... 00:08:02.335 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:02.335 08:52:17 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:02.335 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:02.336 08:52:18 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.336 --rc genhtml_branch_coverage=1 00:08:02.336 --rc genhtml_function_coverage=1 00:08:02.336 --rc genhtml_legend=1 00:08:02.336 --rc geninfo_all_blocks=1 00:08:02.336 --rc geninfo_unexecuted_blocks=1 00:08:02.336 00:08:02.336 ' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.336 --rc genhtml_branch_coverage=1 00:08:02.336 --rc genhtml_function_coverage=1 00:08:02.336 --rc genhtml_legend=1 00:08:02.336 --rc geninfo_all_blocks=1 00:08:02.336 --rc geninfo_unexecuted_blocks=1 00:08:02.336 00:08:02.336 ' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.336 --rc genhtml_branch_coverage=1 00:08:02.336 --rc genhtml_function_coverage=1 00:08:02.336 --rc genhtml_legend=1 00:08:02.336 --rc geninfo_all_blocks=1 00:08:02.336 --rc geninfo_unexecuted_blocks=1 00:08:02.336 00:08:02.336 ' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:02.336 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:02.336 --rc genhtml_branch_coverage=1 00:08:02.336 --rc genhtml_function_coverage=1 00:08:02.336 --rc genhtml_legend=1 00:08:02.336 --rc geninfo_all_blocks=1 00:08:02.336 --rc geninfo_unexecuted_blocks=1 00:08:02.336 00:08:02.336 ' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:02.336 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 
00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:08:02.336 08:52:18 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:08.919 08:52:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:08.919 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # 
pci_devs=("${e810[@]}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:08.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:08.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:08.920 
08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:08.920 Found net devices under 0000:86:00.0: cvl_0_0 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:08.920 08:52:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:08.920 Found net devices under 0000:86:00.1: cvl_0_1 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- 
# set_up lo NVMF_TARGET_NS_CMD 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:08.920 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:08.921 10.0.0.1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:08.921 10.0.0.2 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:23 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:08.921 08:52:23 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:08.921 08:52:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:08.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:08.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:08:08.921 00:08:08.921 --- 10.0.0.1 ping statistics --- 00:08:08.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.921 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:08.921 08:52:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:08.921 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:08.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:08.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:08:08.922 00:08:08.922 --- 10.0.0.2 ping statistics --- 00:08:08.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:08.922 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:08.922 08:52:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n 
initiator1 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:08:08.922 08:52:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=2214014 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 2214014 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:08.922 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2214014 ']' 00:08:08.922 08:52:24 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:08.923 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:08.923 [2024-11-20 08:52:24.234879] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:08:08.923 [2024-11-20 08:52:24.234934] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:08.923 [2024-11-20 08:52:24.313764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:08.923 [2024-11-20 08:52:24.356503] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:08.923 [2024-11-20 08:52:24.356541] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:08.923 [2024-11-20 08:52:24.356548] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:08.923 [2024-11-20 08:52:24.356554] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:08.923 [2024-11-20 08:52:24.356560] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:08.923 [2024-11-20 08:52:24.358150] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:08.923 [2024-11-20 08:52:24.358264] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:08.923 [2024-11-20 08:52:24.358401] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:08.923 [2024-11-20 08:52:24.358402] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:08.923 [2024-11-20 08:52:24.663177] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:08:08.923 08:52:24 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.180 08:52:25 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:08:09.180 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.437 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:08:09.437 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.694 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:08:09.694 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:08:09.951 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:09.951 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:08:09.951 08:52:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.209 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:08:10.209 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.466 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:08:10.466 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:08:10.723 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:10.980 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:10.980 08:52:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:10.980 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:08:10.980 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:08:11.237 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:11.494 [2024-11-20 08:52:27.406075] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:11.494 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:08:11.751 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:08:12.008 08:52:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:08:13.379 08:52:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:08:15.274 08:52:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:15.274 [global] 00:08:15.274 thread=1 00:08:15.274 invalidate=1 00:08:15.274 rw=write 00:08:15.274 time_based=1 00:08:15.274 runtime=1 00:08:15.274 ioengine=libaio 00:08:15.274 direct=1 00:08:15.274 bs=4096 00:08:15.274 iodepth=1 00:08:15.274 norandommap=0 00:08:15.274 numjobs=1 00:08:15.274 00:08:15.274 
verify_dump=1 00:08:15.274 verify_backlog=512 00:08:15.274 verify_state_save=0 00:08:15.274 do_verify=1 00:08:15.274 verify=crc32c-intel 00:08:15.274 [job0] 00:08:15.274 filename=/dev/nvme0n1 00:08:15.274 [job1] 00:08:15.274 filename=/dev/nvme0n2 00:08:15.274 [job2] 00:08:15.274 filename=/dev/nvme0n3 00:08:15.274 [job3] 00:08:15.274 filename=/dev/nvme0n4 00:08:15.274 Could not set queue depth (nvme0n1) 00:08:15.274 Could not set queue depth (nvme0n2) 00:08:15.274 Could not set queue depth (nvme0n3) 00:08:15.274 Could not set queue depth (nvme0n4) 00:08:15.531 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:15.531 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:15.531 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:15.531 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:15.531 fio-3.35 00:08:15.531 Starting 4 threads 00:08:16.902 00:08:16.902 job0: (groupid=0, jobs=1): err= 0: pid=2215499: Wed Nov 20 08:52:32 2024 00:08:16.902 read: IOPS=2537, BW=9.91MiB/s (10.4MB/s)(9.92MiB/1001msec) 00:08:16.902 slat (nsec): min=6490, max=27456, avg=7452.87, stdev=1130.69 00:08:16.902 clat (usec): min=158, max=513, avg=215.03, stdev=36.08 00:08:16.902 lat (usec): min=165, max=521, avg=222.48, stdev=36.26 00:08:16.902 clat percentiles (usec): 00:08:16.902 | 1.00th=[ 163], 5.00th=[ 169], 10.00th=[ 176], 20.00th=[ 184], 00:08:16.902 | 30.00th=[ 188], 40.00th=[ 200], 50.00th=[ 215], 60.00th=[ 225], 00:08:16.902 | 70.00th=[ 233], 80.00th=[ 243], 90.00th=[ 255], 95.00th=[ 269], 00:08:16.902 | 99.00th=[ 322], 99.50th=[ 371], 99.90th=[ 441], 99.95th=[ 478], 00:08:16.902 | 99.99th=[ 515] 00:08:16.902 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:16.902 slat (nsec): min=7004, max=37180, avg=10807.97, stdev=1518.29 
00:08:16.902 clat (usec): min=107, max=386, avg=154.78, stdev=36.30 00:08:16.902 lat (usec): min=122, max=416, avg=165.59, stdev=36.54 00:08:16.902 clat percentiles (usec): 00:08:16.902 | 1.00th=[ 117], 5.00th=[ 122], 10.00th=[ 125], 20.00th=[ 128], 00:08:16.902 | 30.00th=[ 131], 40.00th=[ 135], 50.00th=[ 139], 60.00th=[ 149], 00:08:16.902 | 70.00th=[ 161], 80.00th=[ 186], 90.00th=[ 219], 95.00th=[ 235], 00:08:16.902 | 99.00th=[ 251], 99.50th=[ 260], 99.90th=[ 359], 99.95th=[ 383], 00:08:16.902 | 99.99th=[ 388] 00:08:16.902 bw ( KiB/s): min=11096, max=11096, per=56.18%, avg=11096.00, stdev= 0.00, samples=1 00:08:16.902 iops : min= 2774, max= 2774, avg=2774.00, stdev= 0.00, samples=1 00:08:16.902 lat (usec) : 250=93.00%, 500=6.98%, 750=0.02% 00:08:16.902 cpu : usr=3.00%, sys=4.30%, ctx=5101, majf=0, minf=1 00:08:16.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:16.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.902 issued rwts: total=2540,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:16.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:16.902 job1: (groupid=0, jobs=1): err= 0: pid=2215525: Wed Nov 20 08:52:32 2024 00:08:16.902 read: IOPS=1015, BW=4063KiB/s (4160kB/s)(4144KiB/1020msec) 00:08:16.902 slat (nsec): min=6551, max=23181, avg=7670.44, stdev=1761.07 00:08:16.902 clat (usec): min=191, max=42011, avg=677.80, stdev=4227.21 00:08:16.902 lat (usec): min=199, max=42034, avg=685.47, stdev=4228.53 00:08:16.902 clat percentiles (usec): 00:08:16.902 | 1.00th=[ 202], 5.00th=[ 212], 10.00th=[ 219], 20.00th=[ 227], 00:08:16.902 | 30.00th=[ 231], 40.00th=[ 235], 50.00th=[ 239], 60.00th=[ 243], 00:08:16.902 | 70.00th=[ 247], 80.00th=[ 253], 90.00th=[ 260], 95.00th=[ 269], 00:08:16.902 | 99.00th=[40633], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:08:16.902 | 99.99th=[42206] 00:08:16.902 
write: IOPS=1505, BW=6024KiB/s (6168kB/s)(6144KiB/1020msec); 0 zone resets 00:08:16.902 slat (nsec): min=9712, max=45952, avg=10957.54, stdev=1293.40 00:08:16.902 clat (usec): min=107, max=393, avg=186.98, stdev=48.43 00:08:16.902 lat (usec): min=117, max=410, avg=197.93, stdev=48.67 00:08:16.902 clat percentiles (usec): 00:08:16.902 | 1.00th=[ 118], 5.00th=[ 124], 10.00th=[ 131], 20.00th=[ 139], 00:08:16.902 | 30.00th=[ 147], 40.00th=[ 163], 50.00th=[ 182], 60.00th=[ 196], 00:08:16.902 | 70.00th=[ 221], 80.00th=[ 235], 90.00th=[ 258], 95.00th=[ 273], 00:08:16.902 | 99.00th=[ 289], 99.50th=[ 297], 99.90th=[ 367], 99.95th=[ 396], 00:08:16.902 | 99.99th=[ 396] 00:08:16.902 bw ( KiB/s): min= 3240, max= 9048, per=31.11%, avg=6144.00, stdev=4106.88, samples=2 00:08:16.902 iops : min= 810, max= 2262, avg=1536.00, stdev=1026.72, samples=2 00:08:16.902 lat (usec) : 250=83.20%, 500=16.33%, 750=0.04% 00:08:16.902 lat (msec) : 50=0.43% 00:08:16.902 cpu : usr=1.47%, sys=2.16%, ctx=2573, majf=0, minf=1 00:08:16.902 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:16.902 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.902 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.902 issued rwts: total=1036,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:16.902 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:16.902 job2: (groupid=0, jobs=1): err= 0: pid=2215559: Wed Nov 20 08:52:32 2024 00:08:16.902 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec) 00:08:16.902 slat (nsec): min=8132, max=23062, avg=15755.09, stdev=6700.45 00:08:16.902 clat (usec): min=40812, max=42393, avg=41325.19, stdev=521.27 00:08:16.902 lat (usec): min=40834, max=42403, avg=41340.95, stdev=519.65 00:08:16.902 clat percentiles (usec): 00:08:16.902 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:08:16.902 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 
00:08:16.902 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:16.902 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:16.902 | 99.99th=[42206] 00:08:16.902 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets 00:08:16.902 slat (nsec): min=9520, max=70115, avg=11532.73, stdev=3334.98 00:08:16.903 clat (usec): min=140, max=348, avg=172.46, stdev=19.63 00:08:16.903 lat (usec): min=151, max=418, avg=183.99, stdev=21.00 00:08:16.903 clat percentiles (usec): 00:08:16.903 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 155], 20.00th=[ 159], 00:08:16.903 | 30.00th=[ 163], 40.00th=[ 165], 50.00th=[ 169], 60.00th=[ 174], 00:08:16.903 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 200], 00:08:16.903 | 99.00th=[ 233], 99.50th=[ 281], 99.90th=[ 351], 99.95th=[ 351], 00:08:16.903 | 99.99th=[ 351] 00:08:16.903 bw ( KiB/s): min= 4096, max= 4096, per=20.74%, avg=4096.00, stdev= 0.00, samples=1 00:08:16.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:16.903 lat (usec) : 250=94.94%, 500=0.94% 00:08:16.903 lat (msec) : 50=4.12% 00:08:16.903 cpu : usr=0.20%, sys=0.70%, ctx=534, majf=0, minf=2 00:08:16.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.903 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:16.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:16.903 job3: (groupid=0, jobs=1): err= 0: pid=2215570: Wed Nov 20 08:52:32 2024 00:08:16.903 read: IOPS=34, BW=139KiB/s (142kB/s)(144KiB/1037msec) 00:08:16.903 slat (nsec): min=7498, max=28385, avg=15007.81, stdev=6365.34 00:08:16.903 clat (usec): min=232, max=42151, avg=25482.21, stdev=20364.10 00:08:16.903 lat (usec): min=240, max=42162, avg=25497.22, stdev=20366.93 00:08:16.903 clat 
percentiles (usec): 00:08:16.903 | 1.00th=[ 233], 5.00th=[ 235], 10.00th=[ 302], 20.00th=[ 330], 00:08:16.903 | 30.00th=[ 351], 40.00th=[40633], 50.00th=[41157], 60.00th=[41157], 00:08:16.903 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:08:16.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:16.903 | 99.99th=[42206] 00:08:16.903 write: IOPS=493, BW=1975KiB/s (2022kB/s)(2048KiB/1037msec); 0 zone resets 00:08:16.903 slat (nsec): min=11995, max=38896, avg=13087.41, stdev=1803.78 00:08:16.903 clat (usec): min=137, max=355, avg=215.82, stdev=27.78 00:08:16.903 lat (usec): min=150, max=394, avg=228.91, stdev=28.19 00:08:16.903 clat percentiles (usec): 00:08:16.903 | 1.00th=[ 169], 5.00th=[ 180], 10.00th=[ 184], 20.00th=[ 190], 00:08:16.903 | 30.00th=[ 196], 40.00th=[ 208], 50.00th=[ 219], 60.00th=[ 223], 00:08:16.903 | 70.00th=[ 229], 80.00th=[ 235], 90.00th=[ 245], 95.00th=[ 253], 00:08:16.903 | 99.00th=[ 306], 99.50th=[ 351], 99.90th=[ 355], 99.95th=[ 355], 00:08:16.903 | 99.99th=[ 355] 00:08:16.903 bw ( KiB/s): min= 4096, max= 4096, per=20.74%, avg=4096.00, stdev= 0.00, samples=1 00:08:16.903 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:16.903 lat (usec) : 250=88.32%, 500=7.66% 00:08:16.903 lat (msec) : 50=4.01% 00:08:16.903 cpu : usr=0.39%, sys=0.58%, ctx=549, majf=0, minf=1 00:08:16.903 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:16.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:16.903 issued rwts: total=36,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:16.903 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:16.903 00:08:16.903 Run status group 0 (all jobs): 00:08:16.903 READ: bw=13.7MiB/s (14.4MB/s), 87.6KiB/s-9.91MiB/s (89.7kB/s-10.4MB/s), io=14.2MiB (14.9MB), run=1001-1037msec 00:08:16.903 WRITE: bw=19.3MiB/s 
(20.2MB/s), 1975KiB/s-9.99MiB/s (2022kB/s-10.5MB/s), io=20.0MiB (21.0MB), run=1001-1037msec 00:08:16.903 00:08:16.903 Disk stats (read/write): 00:08:16.903 nvme0n1: ios=2029/2048, merge=0/0, ticks=1402/316, in_queue=1718, util=96.99% 00:08:16.903 nvme0n2: ios=1054/1536, merge=0/0, ticks=1439/287, in_queue=1726, util=97.42% 00:08:16.903 nvme0n3: ios=17/512, merge=0/0, ticks=705/77, in_queue=782, util=87.47% 00:08:16.903 nvme0n4: ios=52/512, merge=0/0, ticks=1571/108, in_queue=1679, util=97.12% 00:08:16.903 08:52:32 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:08:16.903 [global] 00:08:16.903 thread=1 00:08:16.903 invalidate=1 00:08:16.903 rw=randwrite 00:08:16.903 time_based=1 00:08:16.903 runtime=1 00:08:16.903 ioengine=libaio 00:08:16.903 direct=1 00:08:16.903 bs=4096 00:08:16.903 iodepth=1 00:08:16.903 norandommap=0 00:08:16.903 numjobs=1 00:08:16.903 00:08:16.903 verify_dump=1 00:08:16.903 verify_backlog=512 00:08:16.903 verify_state_save=0 00:08:16.903 do_verify=1 00:08:16.903 verify=crc32c-intel 00:08:16.903 [job0] 00:08:16.903 filename=/dev/nvme0n1 00:08:16.903 [job1] 00:08:16.903 filename=/dev/nvme0n2 00:08:16.903 [job2] 00:08:16.903 filename=/dev/nvme0n3 00:08:16.903 [job3] 00:08:16.903 filename=/dev/nvme0n4 00:08:16.903 Could not set queue depth (nvme0n1) 00:08:16.903 Could not set queue depth (nvme0n2) 00:08:16.903 Could not set queue depth (nvme0n3) 00:08:16.903 Could not set queue depth (nvme0n4) 00:08:17.160 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:17.160 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:17.160 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:17.160 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 
4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:17.160 fio-3.35 00:08:17.160 Starting 4 threads 00:08:18.532 00:08:18.532 job0: (groupid=0, jobs=1): err= 0: pid=2215958: Wed Nov 20 08:52:34 2024 00:08:18.532 read: IOPS=1969, BW=7876KiB/s (8065kB/s)(7884KiB/1001msec) 00:08:18.532 slat (nsec): min=6857, max=37958, avg=8059.39, stdev=1134.22 00:08:18.532 clat (usec): min=186, max=565, avg=300.57, stdev=67.25 00:08:18.532 lat (usec): min=194, max=573, avg=308.63, stdev=67.30 00:08:18.532 clat percentiles (usec): 00:08:18.532 | 1.00th=[ 208], 5.00th=[ 225], 10.00th=[ 235], 20.00th=[ 255], 00:08:18.532 | 30.00th=[ 265], 40.00th=[ 273], 50.00th=[ 281], 60.00th=[ 289], 00:08:18.532 | 70.00th=[ 314], 80.00th=[ 347], 90.00th=[ 383], 95.00th=[ 465], 00:08:18.532 | 99.00th=[ 515], 99.50th=[ 529], 99.90th=[ 553], 99.95th=[ 570], 00:08:18.532 | 99.99th=[ 570] 00:08:18.532 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:18.532 slat (nsec): min=9691, max=63798, avg=10919.88, stdev=2129.51 00:08:18.532 clat (usec): min=118, max=508, avg=174.59, stdev=31.43 00:08:18.532 lat (usec): min=128, max=518, avg=185.51, stdev=31.58 00:08:18.532 clat percentiles (usec): 00:08:18.532 | 1.00th=[ 130], 5.00th=[ 139], 10.00th=[ 143], 20.00th=[ 151], 00:08:18.532 | 30.00th=[ 157], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 174], 00:08:18.532 | 70.00th=[ 180], 80.00th=[ 192], 90.00th=[ 227], 95.00th=[ 243], 00:08:18.532 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 285], 99.95th=[ 285], 00:08:18.532 | 99.99th=[ 510] 00:08:18.532 bw ( KiB/s): min= 8192, max= 8192, per=29.69%, avg=8192.00, stdev= 0.00, samples=1 00:08:18.532 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:18.532 lat (usec) : 250=58.00%, 500=41.25%, 750=0.75% 00:08:18.532 cpu : usr=4.00%, sys=5.60%, ctx=4020, majf=0, minf=1 00:08:18.532 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:18.532 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.532 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.532 issued rwts: total=1971,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.532 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:18.532 job1: (groupid=0, jobs=1): err= 0: pid=2215959: Wed Nov 20 08:52:34 2024 00:08:18.532 read: IOPS=2541, BW=9.93MiB/s (10.4MB/s)(9.94MiB/1001msec) 00:08:18.532 slat (nsec): min=7207, max=36090, avg=8212.61, stdev=1388.51 00:08:18.532 clat (usec): min=172, max=1197, avg=213.08, stdev=34.61 00:08:18.532 lat (usec): min=180, max=1206, avg=221.29, stdev=34.66 00:08:18.532 clat percentiles (usec): 00:08:18.532 | 1.00th=[ 180], 5.00th=[ 186], 10.00th=[ 190], 20.00th=[ 194], 00:08:18.532 | 30.00th=[ 198], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 212], 00:08:18.532 | 70.00th=[ 219], 80.00th=[ 233], 90.00th=[ 247], 95.00th=[ 255], 00:08:18.532 | 99.00th=[ 273], 99.50th=[ 285], 99.90th=[ 627], 99.95th=[ 873], 00:08:18.532 | 99.99th=[ 1205] 00:08:18.532 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:08:18.532 slat (nsec): min=10171, max=40533, avg=11514.61, stdev=1749.94 00:08:18.532 clat (usec): min=120, max=288, avg=153.23, stdev=12.46 00:08:18.532 lat (usec): min=131, max=327, avg=164.74, stdev=12.75 00:08:18.532 clat percentiles (usec): 00:08:18.532 | 1.00th=[ 130], 5.00th=[ 137], 10.00th=[ 139], 20.00th=[ 145], 00:08:18.532 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:08:18.532 | 70.00th=[ 159], 80.00th=[ 163], 90.00th=[ 169], 95.00th=[ 174], 00:08:18.532 | 99.00th=[ 186], 99.50th=[ 200], 99.90th=[ 237], 99.95th=[ 237], 00:08:18.532 | 99.99th=[ 289] 00:08:18.532 bw ( KiB/s): min=12288, max=12288, per=44.53%, avg=12288.00, stdev= 0.00, samples=1 00:08:18.532 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:18.532 lat (usec) : 250=95.94%, 500=3.98%, 750=0.04%, 1000=0.02% 00:08:18.532 lat (msec) : 2=0.02% 00:08:18.533 cpu : 
usr=4.80%, sys=7.50%, ctx=5105, majf=0, minf=1 00:08:18.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:18.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 issued rwts: total=2544,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:18.533 job2: (groupid=0, jobs=1): err= 0: pid=2215960: Wed Nov 20 08:52:34 2024 00:08:18.533 read: IOPS=23, BW=92.4KiB/s (94.6kB/s)(96.0KiB/1039msec) 00:08:18.533 slat (nsec): min=12740, max=26800, avg=22056.17, stdev=3435.18 00:08:18.533 clat (usec): min=357, max=41089, avg=39248.46, stdev=8284.41 00:08:18.533 lat (usec): min=384, max=41111, avg=39270.52, stdev=8283.42 00:08:18.533 clat percentiles (usec): 00:08:18.533 | 1.00th=[ 359], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:18.533 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:18.533 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:08:18.533 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:18.533 | 99.99th=[41157] 00:08:18.533 write: IOPS=492, BW=1971KiB/s (2018kB/s)(2048KiB/1039msec); 0 zone resets 00:08:18.533 slat (nsec): min=10617, max=46056, avg=12537.65, stdev=2431.65 00:08:18.533 clat (usec): min=143, max=320, avg=172.66, stdev=16.28 00:08:18.533 lat (usec): min=154, max=366, avg=185.19, stdev=17.12 00:08:18.533 clat percentiles (usec): 00:08:18.533 | 1.00th=[ 149], 5.00th=[ 155], 10.00th=[ 157], 20.00th=[ 161], 00:08:18.533 | 30.00th=[ 165], 40.00th=[ 167], 50.00th=[ 172], 60.00th=[ 174], 00:08:18.533 | 70.00th=[ 178], 80.00th=[ 182], 90.00th=[ 190], 95.00th=[ 198], 00:08:18.533 | 99.00th=[ 233], 99.50th=[ 265], 99.90th=[ 322], 99.95th=[ 322], 00:08:18.533 | 99.99th=[ 322] 00:08:18.533 bw ( KiB/s): min= 4096, max= 4096, per=14.84%, avg=4096.00, stdev= 0.00, 
samples=1 00:08:18.533 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:08:18.533 lat (usec) : 250=94.96%, 500=0.75% 00:08:18.533 lat (msec) : 50=4.29% 00:08:18.533 cpu : usr=0.10%, sys=1.25%, ctx=537, majf=0, minf=1 00:08:18.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:18.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:18.533 job3: (groupid=0, jobs=1): err= 0: pid=2215961: Wed Nov 20 08:52:34 2024 00:08:18.533 read: IOPS=1561, BW=6246KiB/s (6396kB/s)(6252KiB/1001msec) 00:08:18.533 slat (nsec): min=8610, max=44456, avg=9800.55, stdev=1637.43 00:08:18.533 clat (usec): min=214, max=1356, avg=344.65, stdev=88.05 00:08:18.533 lat (usec): min=224, max=1366, avg=354.45, stdev=88.11 00:08:18.533 clat percentiles (usec): 00:08:18.533 | 1.00th=[ 237], 5.00th=[ 249], 10.00th=[ 260], 20.00th=[ 277], 00:08:18.533 | 30.00th=[ 289], 40.00th=[ 306], 50.00th=[ 322], 60.00th=[ 334], 00:08:18.533 | 70.00th=[ 355], 80.00th=[ 424], 90.00th=[ 486], 95.00th=[ 502], 00:08:18.533 | 99.00th=[ 529], 99.50th=[ 537], 99.90th=[ 996], 99.95th=[ 1352], 00:08:18.533 | 99.99th=[ 1352] 00:08:18.533 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:08:18.533 slat (nsec): min=10844, max=37685, avg=13438.15, stdev=1923.58 00:08:18.533 clat (usec): min=140, max=339, avg=198.53, stdev=38.09 00:08:18.533 lat (usec): min=153, max=352, avg=211.97, stdev=38.23 00:08:18.533 clat percentiles (usec): 00:08:18.533 | 1.00th=[ 147], 5.00th=[ 155], 10.00th=[ 161], 20.00th=[ 167], 00:08:18.533 | 30.00th=[ 172], 40.00th=[ 178], 50.00th=[ 184], 60.00th=[ 196], 00:08:18.533 | 70.00th=[ 225], 80.00th=[ 239], 90.00th=[ 247], 95.00th=[ 269], 00:08:18.533 | 99.00th=[ 306], 
99.50th=[ 318], 99.90th=[ 334], 99.95th=[ 338], 00:08:18.533 | 99.99th=[ 338] 00:08:18.533 bw ( KiB/s): min= 8192, max= 8192, per=29.69%, avg=8192.00, stdev= 0.00, samples=1 00:08:18.533 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:18.533 lat (usec) : 250=54.03%, 500=43.40%, 750=2.46%, 1000=0.08% 00:08:18.533 lat (msec) : 2=0.03% 00:08:18.533 cpu : usr=2.80%, sys=7.00%, ctx=3613, majf=0, minf=1 00:08:18.533 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:18.533 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:18.533 issued rwts: total=1563,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:18.533 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:18.533 00:08:18.533 Run status group 0 (all jobs): 00:08:18.533 READ: bw=22.9MiB/s (24.1MB/s), 92.4KiB/s-9.93MiB/s (94.6kB/s-10.4MB/s), io=23.8MiB (25.0MB), run=1001-1039msec 00:08:18.533 WRITE: bw=26.9MiB/s (28.3MB/s), 1971KiB/s-9.99MiB/s (2018kB/s-10.5MB/s), io=28.0MiB (29.4MB), run=1001-1039msec 00:08:18.533 00:08:18.533 Disk stats (read/write): 00:08:18.533 nvme0n1: ios=1586/1969, merge=0/0, ticks=475/315, in_queue=790, util=86.77% 00:08:18.533 nvme0n2: ios=2075/2321, merge=0/0, ticks=1399/323, in_queue=1722, util=98.27% 00:08:18.533 nvme0n3: ios=62/512, merge=0/0, ticks=1661/88, in_queue=1749, util=97.08% 00:08:18.533 nvme0n4: ios=1533/1536, merge=0/0, ticks=1418/276, in_queue=1694, util=98.11% 00:08:18.533 08:52:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:08:18.533 [global] 00:08:18.533 thread=1 00:08:18.533 invalidate=1 00:08:18.533 rw=write 00:08:18.533 time_based=1 00:08:18.533 runtime=1 00:08:18.533 ioengine=libaio 00:08:18.533 direct=1 00:08:18.533 bs=4096 00:08:18.533 iodepth=128 00:08:18.533 
norandommap=0 00:08:18.533 numjobs=1 00:08:18.533 00:08:18.533 verify_dump=1 00:08:18.533 verify_backlog=512 00:08:18.533 verify_state_save=0 00:08:18.533 do_verify=1 00:08:18.533 verify=crc32c-intel 00:08:18.533 [job0] 00:08:18.533 filename=/dev/nvme0n1 00:08:18.533 [job1] 00:08:18.533 filename=/dev/nvme0n2 00:08:18.533 [job2] 00:08:18.533 filename=/dev/nvme0n3 00:08:18.533 [job3] 00:08:18.533 filename=/dev/nvme0n4 00:08:18.533 Could not set queue depth (nvme0n1) 00:08:18.533 Could not set queue depth (nvme0n2) 00:08:18.533 Could not set queue depth (nvme0n3) 00:08:18.533 Could not set queue depth (nvme0n4) 00:08:18.803 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:18.803 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:18.803 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:18.803 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:18.803 fio-3.35 00:08:18.803 Starting 4 threads 00:08:20.191 00:08:20.191 job0: (groupid=0, jobs=1): err= 0: pid=2216330: Wed Nov 20 08:52:35 2024 00:08:20.191 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec) 00:08:20.191 slat (nsec): min=1382, max=10811k, avg=101757.79, stdev=673485.21 00:08:20.191 clat (usec): min=3634, max=46467, avg=12065.96, stdev=5296.40 00:08:20.191 lat (usec): min=3643, max=46477, avg=12167.72, stdev=5361.70 00:08:20.191 clat percentiles (usec): 00:08:20.191 | 1.00th=[ 6128], 5.00th=[ 8586], 10.00th=[ 9110], 20.00th=[ 9372], 00:08:20.191 | 30.00th=[ 9634], 40.00th=[10028], 50.00th=[10552], 60.00th=[10945], 00:08:20.191 | 70.00th=[11600], 80.00th=[13566], 90.00th=[16450], 95.00th=[23725], 00:08:20.191 | 99.00th=[35390], 99.50th=[41157], 99.90th=[46400], 99.95th=[46400], 00:08:20.191 | 99.99th=[46400] 00:08:20.191 write: IOPS=3027, BW=11.8MiB/s 
(12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:08:20.191 slat (usec): min=2, max=45554, avg=223.71, stdev=2068.03 00:08:20.191 clat (usec): min=249, max=211629, avg=26710.72, stdev=31364.65 00:08:20.191 lat (usec): min=262, max=211638, avg=26934.43, stdev=31660.71 00:08:20.191 clat percentiles (usec): 00:08:20.191 | 1.00th=[ 1020], 5.00th=[ 2114], 10.00th=[ 3687], 20.00th=[ 7111], 00:08:20.191 | 30.00th=[ 8586], 40.00th=[ 10028], 50.00th=[ 13435], 60.00th=[ 21365], 00:08:20.191 | 70.00th=[ 31589], 80.00th=[ 42730], 90.00th=[ 54264], 95.00th=[ 92799], 00:08:20.191 | 99.00th=[141558], 99.50th=[166724], 99.90th=[210764], 99.95th=[210764], 00:08:20.191 | 99.99th=[210764] 00:08:20.191 bw ( KiB/s): min=11744, max=11800, per=17.99%, avg=11772.00, stdev=39.60, samples=2 00:08:20.191 iops : min= 2936, max= 2950, avg=2943.00, stdev= 9.90, samples=2 00:08:20.191 lat (usec) : 250=0.02%, 500=0.05%, 750=0.46% 00:08:20.191 lat (msec) : 2=1.69%, 4=3.46%, 10=32.13%, 20=34.80%, 50=21.30% 00:08:20.191 lat (msec) : 100=3.69%, 250=2.40% 00:08:20.191 cpu : usr=1.58%, sys=4.15%, ctx=310, majf=0, minf=1 00:08:20.191 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:08:20.191 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.191 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:20.191 issued rwts: total=2560,3070,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.191 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:20.191 job1: (groupid=0, jobs=1): err= 0: pid=2216331: Wed Nov 20 08:52:35 2024 00:08:20.191 read: IOPS=4039, BW=15.8MiB/s (16.5MB/s)(16.0MiB/1014msec) 00:08:20.191 slat (nsec): min=1416, max=14662k, avg=116497.84, stdev=819791.73 00:08:20.191 clat (usec): min=4250, max=48033, avg=13900.63, stdev=6764.31 00:08:20.191 lat (usec): min=4261, max=48039, avg=14017.13, stdev=6832.14 00:08:20.191 clat percentiles (usec): 00:08:20.191 | 1.00th=[ 5145], 5.00th=[ 8717], 10.00th=[ 8979], 
20.00th=[ 9241], 00:08:20.191 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10945], 60.00th=[12911], 00:08:20.191 | 70.00th=[15533], 80.00th=[18220], 90.00th=[22152], 95.00th=[26084], 00:08:20.191 | 99.00th=[42730], 99.50th=[45351], 99.90th=[47973], 99.95th=[47973], 00:08:20.191 | 99.99th=[47973] 00:08:20.192 write: IOPS=4195, BW=16.4MiB/s (17.2MB/s)(16.6MiB/1014msec); 0 zone resets 00:08:20.192 slat (usec): min=2, max=11467, avg=113.55, stdev=604.05 00:08:20.192 clat (usec): min=1523, max=51628, avg=16834.63, stdev=9866.69 00:08:20.192 lat (usec): min=1535, max=51637, avg=16948.18, stdev=9929.17 00:08:20.192 clat percentiles (usec): 00:08:20.192 | 1.00th=[ 3064], 5.00th=[ 5014], 10.00th=[ 7439], 20.00th=[ 9765], 00:08:20.192 | 30.00th=[10421], 40.00th=[10814], 50.00th=[14615], 60.00th=[17433], 00:08:20.192 | 70.00th=[18482], 80.00th=[24249], 90.00th=[32637], 95.00th=[36963], 00:08:20.192 | 99.00th=[43254], 99.50th=[46924], 99.90th=[51643], 99.95th=[51643], 00:08:20.192 | 99.99th=[51643] 00:08:20.192 bw ( KiB/s): min=15984, max=17032, per=25.23%, avg=16508.00, stdev=741.05, samples=2 00:08:20.192 iops : min= 3996, max= 4258, avg=4127.00, stdev=185.26, samples=2 00:08:20.192 lat (msec) : 2=0.07%, 4=1.32%, 10=25.49%, 20=53.69%, 50=19.28% 00:08:20.192 lat (msec) : 100=0.16% 00:08:20.192 cpu : usr=3.65%, sys=4.24%, ctx=480, majf=0, minf=1 00:08:20.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:08:20.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:20.192 issued rwts: total=4096,4254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:20.192 job2: (groupid=0, jobs=1): err= 0: pid=2216338: Wed Nov 20 08:52:35 2024 00:08:20.192 read: IOPS=3320, BW=13.0MiB/s (13.6MB/s)(13.0MiB/1006msec) 00:08:20.192 slat (nsec): min=1390, max=25114k, avg=125189.93, 
stdev=1052565.74 00:08:20.192 clat (usec): min=3211, max=54670, avg=16932.35, stdev=8516.66 00:08:20.192 lat (usec): min=6563, max=54688, avg=17057.54, stdev=8624.10 00:08:20.192 clat percentiles (usec): 00:08:20.192 | 1.00th=[ 7177], 5.00th=[ 9372], 10.00th=[10814], 20.00th=[11600], 00:08:20.192 | 30.00th=[12125], 40.00th=[12518], 50.00th=[12911], 60.00th=[13829], 00:08:20.192 | 70.00th=[17433], 80.00th=[22676], 90.00th=[31851], 95.00th=[36439], 00:08:20.192 | 99.00th=[39584], 99.50th=[41157], 99.90th=[45876], 99.95th=[53740], 00:08:20.192 | 99.99th=[54789] 00:08:20.192 write: IOPS=3562, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1006msec); 0 zone resets 00:08:20.192 slat (usec): min=2, max=12148, avg=142.60, stdev=732.99 00:08:20.192 clat (usec): min=5503, max=56120, avg=19736.68, stdev=10792.94 00:08:20.192 lat (usec): min=5532, max=56129, avg=19879.27, stdev=10847.55 00:08:20.192 clat percentiles (usec): 00:08:20.192 | 1.00th=[ 6915], 5.00th=[ 9634], 10.00th=[10683], 20.00th=[11207], 00:08:20.192 | 30.00th=[12125], 40.00th=[15533], 50.00th=[17433], 60.00th=[18482], 00:08:20.192 | 70.00th=[20317], 80.00th=[22938], 90.00th=[37487], 95.00th=[44827], 00:08:20.192 | 99.00th=[55837], 99.50th=[55837], 99.90th=[56361], 99.95th=[56361], 00:08:20.192 | 99.99th=[56361] 00:08:20.192 bw ( KiB/s): min=12808, max=15864, per=21.91%, avg=14336.00, stdev=2160.92, samples=2 00:08:20.192 iops : min= 3202, max= 3966, avg=3584.00, stdev=540.23, samples=2 00:08:20.192 lat (msec) : 4=0.01%, 10=6.33%, 20=66.72%, 50=25.77%, 100=1.17% 00:08:20.192 cpu : usr=2.89%, sys=3.78%, ctx=348, majf=0, minf=1 00:08:20.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:08:20.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:20.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:20.192 issued rwts: total=3340,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.192 latency : target=0, window=0, percentile=100.00%, depth=128 
00:08:20.192 job3: (groupid=0, jobs=1): err= 0: pid=2216339: Wed Nov 20 08:52:35 2024 00:08:20.192 read: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec) 00:08:20.192 slat (nsec): min=1446, max=11641k, avg=88940.27, stdev=576198.16 00:08:20.192 clat (usec): min=5307, max=38524, avg=11773.81, stdev=4581.79 00:08:20.192 lat (usec): min=5314, max=38565, avg=11862.75, stdev=4629.76 00:08:20.192 clat percentiles (usec): 00:08:20.192 | 1.00th=[ 6915], 5.00th=[ 8225], 10.00th=[ 8586], 20.00th=[ 9110], 00:08:20.192 | 30.00th=[ 9372], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11207], 00:08:20.192 | 70.00th=[11600], 80.00th=[12387], 90.00th=[17171], 95.00th=[22414], 00:08:20.192 | 99.00th=[30016], 99.50th=[31589], 99.90th=[34866], 99.95th=[35914], 00:08:20.192 | 99.99th=[38536] 00:08:20.192 write: IOPS=5666, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1002msec); 0 zone resets 00:08:20.192 slat (usec): min=2, max=8384, avg=81.90, stdev=449.23 00:08:20.192 clat (usec): min=397, max=27133, avg=10643.29, stdev=2711.27 00:08:20.192 lat (usec): min=2671, max=27144, avg=10725.18, stdev=2746.57 00:08:20.192 clat percentiles (usec): 00:08:20.192 | 1.00th=[ 5473], 5.00th=[ 8455], 10.00th=[ 8848], 20.00th=[ 8979], 00:08:20.192 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[10290], 60.00th=[10683], 00:08:20.192 | 70.00th=[11076], 80.00th=[11600], 90.00th=[12780], 95.00th=[15926], 00:08:20.192 | 99.00th=[21365], 99.50th=[22938], 99.90th=[26084], 99.95th=[26084], 00:08:20.192 | 99.99th=[27132] 00:08:20.192 bw ( KiB/s): min=22008, max=22008, per=33.64%, avg=22008.00, stdev= 0.00, samples=1 00:08:20.192 iops : min= 5502, max= 5502, avg=5502.00, stdev= 0.00, samples=1 00:08:20.192 lat (usec) : 500=0.01% 00:08:20.192 lat (msec) : 4=0.26%, 10=43.32%, 20=52.17%, 50=4.24% 00:08:20.192 cpu : usr=3.90%, sys=7.69%, ctx=484, majf=0, minf=1 00:08:20.192 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:08:20.192 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:08:20.192 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:20.192 issued rwts: total=5632,5678,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:20.192 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:20.192 00:08:20.192 Run status group 0 (all jobs): 00:08:20.192 READ: bw=60.2MiB/s (63.1MB/s), 9.86MiB/s-22.0MiB/s (10.3MB/s-23.0MB/s), io=61.0MiB (64.0MB), run=1002-1014msec 00:08:20.192 WRITE: bw=63.9MiB/s (67.0MB/s), 11.8MiB/s-22.1MiB/s (12.4MB/s-23.2MB/s), io=64.8MiB (67.9MB), run=1002-1014msec 00:08:20.192 00:08:20.192 Disk stats (read/write): 00:08:20.192 nvme0n1: ios=2591/2607, merge=0/0, ticks=30528/37921, in_queue=68449, util=98.80% 00:08:20.192 nvme0n2: ios=3096/3439, merge=0/0, ticks=41482/58659, in_queue=100141, util=97.63% 00:08:20.192 nvme0n3: ios=2560/2886, merge=0/0, ticks=23857/36556, in_queue=60413, util=87.50% 00:08:20.192 nvme0n4: ios=4153/4608, merge=0/0, ticks=25543/23737, in_queue=49280, util=97.46% 00:08:20.192 08:52:35 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:08:20.192 [global] 00:08:20.192 thread=1 00:08:20.192 invalidate=1 00:08:20.192 rw=randwrite 00:08:20.192 time_based=1 00:08:20.192 runtime=1 00:08:20.192 ioengine=libaio 00:08:20.192 direct=1 00:08:20.192 bs=4096 00:08:20.192 iodepth=128 00:08:20.192 norandommap=0 00:08:20.192 numjobs=1 00:08:20.192 00:08:20.192 verify_dump=1 00:08:20.192 verify_backlog=512 00:08:20.192 verify_state_save=0 00:08:20.192 do_verify=1 00:08:20.192 verify=crc32c-intel 00:08:20.192 [job0] 00:08:20.192 filename=/dev/nvme0n1 00:08:20.192 [job1] 00:08:20.192 filename=/dev/nvme0n2 00:08:20.192 [job2] 00:08:20.192 filename=/dev/nvme0n3 00:08:20.192 [job3] 00:08:20.192 filename=/dev/nvme0n4 00:08:20.192 Could not set queue depth (nvme0n1) 00:08:20.192 Could not set queue depth (nvme0n2) 00:08:20.192 Could not set queue depth (nvme0n3) 
00:08:20.192 Could not set queue depth (nvme0n4) 00:08:20.453 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:20.453 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:20.453 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:20.453 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:08:20.453 fio-3.35 00:08:20.453 Starting 4 threads 00:08:21.823 00:08:21.823 job0: (groupid=0, jobs=1): err= 0: pid=2216706: Wed Nov 20 08:52:37 2024 00:08:21.823 read: IOPS=4193, BW=16.4MiB/s (17.2MB/s)(16.5MiB/1007msec) 00:08:21.823 slat (nsec): min=1045, max=13552k, avg=128752.81, stdev=860911.99 00:08:21.823 clat (usec): min=2690, max=56644, avg=16069.82, stdev=10705.09 00:08:21.823 lat (usec): min=3855, max=56653, avg=16198.58, stdev=10775.56 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 7242], 5.00th=[ 8356], 10.00th=[ 9503], 20.00th=[10290], 00:08:21.823 | 30.00th=[10814], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:08:21.823 | 70.00th=[13829], 80.00th=[19530], 90.00th=[29492], 95.00th=[46400], 00:08:21.823 | 99.00th=[51119], 99.50th=[52691], 99.90th=[56361], 99.95th=[56886], 00:08:21.823 | 99.99th=[56886] 00:08:21.823 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:08:21.823 slat (nsec): min=1897, max=41458k, avg=85107.58, stdev=774270.06 00:08:21.823 clat (usec): min=224, max=53777, avg=12952.19, stdev=9214.57 00:08:21.823 lat (usec): min=260, max=53782, avg=13037.29, stdev=9241.02 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 2245], 5.00th=[ 4752], 10.00th=[ 6587], 20.00th=[ 9241], 00:08:21.823 | 30.00th=[ 9765], 40.00th=[10290], 50.00th=[10552], 60.00th=[11076], 00:08:21.823 | 70.00th=[11863], 80.00th=[12256], 90.00th=[20841], 95.00th=[36963], 
00:08:21.823 | 99.00th=[51643], 99.50th=[53216], 99.90th=[53740], 99.95th=[53740], 00:08:21.823 | 99.99th=[53740] 00:08:21.823 bw ( KiB/s): min=16376, max=20480, per=24.62%, avg=18428.00, stdev=2901.97, samples=2 00:08:21.823 iops : min= 4094, max= 5120, avg=4607.00, stdev=725.49, samples=2 00:08:21.823 lat (usec) : 250=0.01%, 500=0.02% 00:08:21.823 lat (msec) : 2=0.18%, 4=1.39%, 10=24.22%, 20=59.42%, 50=12.65% 00:08:21.823 lat (msec) : 100=2.11% 00:08:21.823 cpu : usr=2.19%, sys=4.08%, ctx=473, majf=0, minf=1 00:08:21.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:21.823 issued rwts: total=4223,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:21.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:21.823 job1: (groupid=0, jobs=1): err= 0: pid=2216708: Wed Nov 20 08:52:37 2024 00:08:21.823 read: IOPS=4603, BW=18.0MiB/s (18.9MB/s)(18.0MiB/1001msec) 00:08:21.823 slat (nsec): min=1478, max=25291k, avg=113683.99, stdev=817376.20 00:08:21.823 clat (usec): min=6997, max=99925, avg=14455.28, stdev=12642.92 00:08:21.823 lat (usec): min=7001, max=99937, avg=14568.96, stdev=12741.54 00:08:21.823 clat percentiles (msec): 00:08:21.823 | 1.00th=[ 8], 5.00th=[ 9], 10.00th=[ 10], 20.00th=[ 10], 00:08:21.823 | 30.00th=[ 11], 40.00th=[ 11], 50.00th=[ 11], 60.00th=[ 11], 00:08:21.823 | 70.00th=[ 12], 80.00th=[ 13], 90.00th=[ 21], 95.00th=[ 39], 00:08:21.823 | 99.00th=[ 82], 99.50th=[ 83], 99.90th=[ 94], 99.95th=[ 94], 00:08:21.823 | 99.99th=[ 101] 00:08:21.823 write: IOPS=4951, BW=19.3MiB/s (20.3MB/s)(19.4MiB/1001msec); 0 zone resets 00:08:21.823 slat (usec): min=2, max=7352, avg=90.78, stdev=436.68 00:08:21.823 clat (usec): min=510, max=37624, avg=12096.72, stdev=4573.95 00:08:21.823 lat (usec): min=3308, max=37627, avg=12187.49, stdev=4602.84 00:08:21.823 clat 
percentiles (usec): 00:08:21.823 | 1.00th=[ 6915], 5.00th=[ 9634], 10.00th=[ 9765], 20.00th=[10028], 00:08:21.823 | 30.00th=[10290], 40.00th=[10552], 50.00th=[10814], 60.00th=[11338], 00:08:21.823 | 70.00th=[11600], 80.00th=[11863], 90.00th=[20055], 95.00th=[21627], 00:08:21.823 | 99.00th=[33817], 99.50th=[35390], 99.90th=[37487], 99.95th=[37487], 00:08:21.823 | 99.99th=[37487] 00:08:21.823 bw ( KiB/s): min=16384, max=16384, per=21.89%, avg=16384.00, stdev= 0.00, samples=1 00:08:21.823 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=1 00:08:21.823 lat (usec) : 750=0.01% 00:08:21.823 lat (msec) : 4=0.44%, 10=20.49%, 20=68.68%, 50=8.51%, 100=1.86% 00:08:21.823 cpu : usr=4.10%, sys=4.90%, ctx=490, majf=0, minf=1 00:08:21.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:21.823 issued rwts: total=4608,4956,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:21.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:21.823 job2: (groupid=0, jobs=1): err= 0: pid=2216709: Wed Nov 20 08:52:37 2024 00:08:21.823 read: IOPS=4420, BW=17.3MiB/s (18.1MB/s)(17.4MiB/1007msec) 00:08:21.823 slat (nsec): min=1427, max=22020k, avg=119683.73, stdev=852880.47 00:08:21.823 clat (usec): min=420, max=84986, avg=15558.69, stdev=11442.82 00:08:21.823 lat (usec): min=7988, max=85001, avg=15678.38, stdev=11503.19 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 8717], 5.00th=[10028], 10.00th=[10552], 20.00th=[11207], 00:08:21.823 | 30.00th=[11469], 40.00th=[11731], 50.00th=[12125], 60.00th=[12649], 00:08:21.823 | 70.00th=[13435], 80.00th=[13829], 90.00th=[21365], 95.00th=[48497], 00:08:21.823 | 99.00th=[66847], 99.50th=[68682], 99.90th=[84411], 99.95th=[85459], 00:08:21.823 | 99.99th=[85459] 00:08:21.823 write: IOPS=4575, BW=17.9MiB/s 
(18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:08:21.823 slat (usec): min=2, max=12048, avg=96.56, stdev=520.59 00:08:21.823 clat (usec): min=7661, max=32569, avg=12680.20, stdev=2417.88 00:08:21.823 lat (usec): min=7671, max=32581, avg=12776.76, stdev=2465.41 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 8586], 5.00th=[10552], 10.00th=[10945], 20.00th=[11207], 00:08:21.823 | 30.00th=[11469], 40.00th=[11600], 50.00th=[11863], 60.00th=[12256], 00:08:21.823 | 70.00th=[13042], 80.00th=[13698], 90.00th=[15795], 95.00th=[17171], 00:08:21.823 | 99.00th=[21103], 99.50th=[21627], 99.90th=[32637], 99.95th=[32637], 00:08:21.823 | 99.99th=[32637] 00:08:21.823 bw ( KiB/s): min=16384, max=20480, per=24.62%, avg=18432.00, stdev=2896.31, samples=2 00:08:21.823 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:08:21.823 lat (usec) : 500=0.01% 00:08:21.823 lat (msec) : 10=4.11%, 20=89.44%, 50=4.16%, 100=2.29% 00:08:21.823 cpu : usr=3.58%, sys=5.57%, ctx=468, majf=0, minf=1 00:08:21.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:08:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:21.823 issued rwts: total=4451,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:21.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:21.823 job3: (groupid=0, jobs=1): err= 0: pid=2216710: Wed Nov 20 08:52:37 2024 00:08:21.823 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:08:21.823 slat (nsec): min=1144, max=11016k, avg=96968.85, stdev=619235.00 00:08:21.823 clat (usec): min=1248, max=42345, avg=12651.37, stdev=3101.69 00:08:21.823 lat (usec): min=1255, max=42347, avg=12748.33, stdev=3141.47 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 2966], 5.00th=[ 7046], 10.00th=[ 9372], 20.00th=[11076], 00:08:21.823 | 30.00th=[11600], 40.00th=[12387], 50.00th=[13042], 60.00th=[13435], 
00:08:21.823 | 70.00th=[13829], 80.00th=[14353], 90.00th=[15795], 95.00th=[17171], 00:08:21.823 | 99.00th=[19268], 99.50th=[20317], 99.90th=[42206], 99.95th=[42206], 00:08:21.823 | 99.99th=[42206] 00:08:21.823 write: IOPS=4650, BW=18.2MiB/s (19.0MB/s)(18.3MiB/1005msec); 0 zone resets 00:08:21.823 slat (nsec): min=1854, max=10466k, avg=111790.44, stdev=642115.80 00:08:21.823 clat (usec): min=542, max=42329, avg=14686.01, stdev=4855.24 00:08:21.823 lat (usec): min=4473, max=42332, avg=14797.80, stdev=4892.93 00:08:21.823 clat percentiles (usec): 00:08:21.823 | 1.00th=[ 5014], 5.00th=[10159], 10.00th=[10814], 20.00th=[11731], 00:08:21.823 | 30.00th=[12125], 40.00th=[13042], 50.00th=[13304], 60.00th=[13698], 00:08:21.823 | 70.00th=[14222], 80.00th=[19268], 90.00th=[21627], 95.00th=[24249], 00:08:21.823 | 99.00th=[32113], 99.50th=[33162], 99.90th=[34866], 99.95th=[34866], 00:08:21.823 | 99.99th=[42206] 00:08:21.823 bw ( KiB/s): min=16384, max=20480, per=24.62%, avg=18432.00, stdev=2896.31, samples=2 00:08:21.823 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:08:21.823 lat (usec) : 750=0.01% 00:08:21.823 lat (msec) : 2=0.46%, 4=0.79%, 10=6.42%, 20=82.50%, 50=9.81% 00:08:21.823 cpu : usr=3.39%, sys=5.28%, ctx=381, majf=0, minf=1 00:08:21.823 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:08:21.823 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:21.823 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:21.823 issued rwts: total=4608,4674,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:21.823 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:21.823 00:08:21.824 Run status group 0 (all jobs): 00:08:21.824 READ: bw=69.4MiB/s (72.8MB/s), 16.4MiB/s-18.0MiB/s (17.2MB/s-18.9MB/s), io=69.9MiB (73.3MB), run=1001-1007msec 00:08:21.824 WRITE: bw=73.1MiB/s (76.7MB/s), 17.9MiB/s-19.3MiB/s (18.7MB/s-20.3MB/s), io=73.6MiB (77.2MB), run=1001-1007msec 00:08:21.824 
00:08:21.824 Disk stats (read/write): 00:08:21.824 nvme0n1: ios=3411/3584, merge=0/0, ticks=25254/22696, in_queue=47950, util=90.58% 00:08:21.824 nvme0n2: ios=3881/4096, merge=0/0, ticks=19866/15678, in_queue=35544, util=93.91% 00:08:21.824 nvme0n3: ios=4130/4096, merge=0/0, ticks=19757/16159, in_queue=35916, util=97.82% 00:08:21.824 nvme0n4: ios=3628/4096, merge=0/0, ticks=26792/34447, in_queue=61239, util=94.97% 00:08:21.824 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:08:21.824 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2216943 00:08:21.824 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:08:21.824 08:52:37 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:08:21.824 [global] 00:08:21.824 thread=1 00:08:21.824 invalidate=1 00:08:21.824 rw=read 00:08:21.824 time_based=1 00:08:21.824 runtime=10 00:08:21.824 ioengine=libaio 00:08:21.824 direct=1 00:08:21.824 bs=4096 00:08:21.824 iodepth=1 00:08:21.824 norandommap=1 00:08:21.824 numjobs=1 00:08:21.824 00:08:21.824 [job0] 00:08:21.824 filename=/dev/nvme0n1 00:08:21.824 [job1] 00:08:21.824 filename=/dev/nvme0n2 00:08:21.824 [job2] 00:08:21.824 filename=/dev/nvme0n3 00:08:21.824 [job3] 00:08:21.824 filename=/dev/nvme0n4 00:08:21.824 Could not set queue depth (nvme0n1) 00:08:21.824 Could not set queue depth (nvme0n2) 00:08:21.824 Could not set queue depth (nvme0n3) 00:08:21.824 Could not set queue depth (nvme0n4) 00:08:21.824 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:21.824 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:21.824 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:21.824 job3: (g=0): 
rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:21.824 fio-3.35 00:08:21.824 Starting 4 threads 00:08:25.098 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:08:25.098 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:08:25.098 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=274432, buflen=4096 00:08:25.098 fio: pid=2217092, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:25.098 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=16793600, buflen=4096 00:08:25.098 fio: pid=2217091, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:25.098 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:25.098 08:52:40 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:08:25.098 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=15781888, buflen=4096 00:08:25.098 fio: pid=2217089, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:08:25.356 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:25.356 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:08:25.356 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=19873792, buflen=4096 00:08:25.356 fio: pid=2217090, err=95/file:io_u.c:1889, func=io_u 
error, error=Operation not supported 00:08:25.356 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:25.356 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:08:25.356 00:08:25.356 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217089: Wed Nov 20 08:52:41 2024 00:08:25.356 read: IOPS=1218, BW=4874KiB/s (4991kB/s)(15.1MiB/3162msec) 00:08:25.356 slat (usec): min=6, max=29793, avg=25.10, stdev=577.05 00:08:25.356 clat (usec): min=167, max=41173, avg=787.84, stdev=4607.69 00:08:25.356 lat (usec): min=175, max=41195, avg=812.95, stdev=4643.68 00:08:25.356 clat percentiles (usec): 00:08:25.356 | 1.00th=[ 182], 5.00th=[ 204], 10.00th=[ 229], 20.00th=[ 245], 00:08:25.356 | 30.00th=[ 253], 40.00th=[ 258], 50.00th=[ 265], 60.00th=[ 269], 00:08:25.356 | 70.00th=[ 273], 80.00th=[ 277], 90.00th=[ 289], 95.00th=[ 297], 00:08:25.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:25.356 | 99.99th=[41157] 00:08:25.356 bw ( KiB/s): min= 96, max=13441, per=29.43%, avg=4480.17, stdev=6789.96, samples=6 00:08:25.356 iops : min= 24, max= 3360, avg=1120.00, stdev=1697.42, samples=6 00:08:25.356 lat (usec) : 250=26.80%, 500=71.82%, 750=0.03% 00:08:25.356 lat (msec) : 2=0.03%, 50=1.30% 00:08:25.356 cpu : usr=0.79%, sys=1.90%, ctx=3858, majf=0, minf=2 00:08:25.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 issued rwts: total=3854,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.356 job1: (groupid=0, 
jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217090: Wed Nov 20 08:52:41 2024 00:08:25.356 read: IOPS=1434, BW=5739KiB/s (5876kB/s)(19.0MiB/3382msec) 00:08:25.356 slat (usec): min=6, max=15663, avg=24.53, stdev=475.22 00:08:25.356 clat (usec): min=157, max=41204, avg=665.96, stdev=4115.34 00:08:25.356 lat (usec): min=164, max=41225, avg=690.49, stdev=4142.81 00:08:25.356 clat percentiles (usec): 00:08:25.356 | 1.00th=[ 169], 5.00th=[ 178], 10.00th=[ 184], 20.00th=[ 196], 00:08:25.356 | 30.00th=[ 233], 40.00th=[ 249], 50.00th=[ 258], 60.00th=[ 265], 00:08:25.356 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:08:25.356 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:25.356 | 99.99th=[41157] 00:08:25.356 bw ( KiB/s): min= 96, max=14199, per=30.29%, avg=4611.83, stdev=7001.75, samples=6 00:08:25.356 iops : min= 24, max= 3549, avg=1152.83, stdev=1750.23, samples=6 00:08:25.356 lat (usec) : 250=42.06%, 500=56.85%, 750=0.02% 00:08:25.356 lat (msec) : 10=0.02%, 50=1.03% 00:08:25.356 cpu : usr=0.68%, sys=2.40%, ctx=4859, majf=0, minf=1 00:08:25.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 issued rwts: total=4853,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.356 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217091: Wed Nov 20 08:52:41 2024 00:08:25.356 read: IOPS=1387, BW=5548KiB/s (5681kB/s)(16.0MiB/2956msec) 00:08:25.356 slat (nsec): min=7751, max=41440, avg=8954.53, stdev=2079.80 00:08:25.356 clat (usec): min=172, max=41174, avg=704.49, stdev=4475.20 00:08:25.356 lat (usec): min=181, max=41186, avg=713.44, stdev=4476.78 00:08:25.356 clat 
percentiles (usec): 00:08:25.356 | 1.00th=[ 184], 5.00th=[ 188], 10.00th=[ 192], 20.00th=[ 196], 00:08:25.356 | 30.00th=[ 200], 40.00th=[ 204], 50.00th=[ 206], 60.00th=[ 210], 00:08:25.356 | 70.00th=[ 215], 80.00th=[ 219], 90.00th=[ 225], 95.00th=[ 231], 00:08:25.356 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:25.356 | 99.99th=[41157] 00:08:25.356 bw ( KiB/s): min= 96, max=16296, per=21.93%, avg=3339.20, stdev=7243.07, samples=5 00:08:25.356 iops : min= 24, max= 4074, avg=834.80, stdev=1810.77, samples=5 00:08:25.356 lat (usec) : 250=98.29%, 500=0.46% 00:08:25.356 lat (msec) : 50=1.22% 00:08:25.356 cpu : usr=0.61%, sys=2.57%, ctx=4104, majf=0, minf=1 00:08:25.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 issued rwts: total=4101,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.356 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2217092: Wed Nov 20 08:52:41 2024 00:08:25.356 read: IOPS=24, BW=97.1KiB/s (99.4kB/s)(268KiB/2761msec) 00:08:25.356 slat (nsec): min=14273, max=32965, avg=25550.50, stdev=1774.08 00:08:25.356 clat (usec): min=40754, max=41948, avg=41044.54, stdev=262.42 00:08:25.356 lat (usec): min=40780, max=41974, avg=41070.08, stdev=262.28 00:08:25.356 clat percentiles (usec): 00:08:25.356 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:08:25.356 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:08:25.356 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41681], 00:08:25.356 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:08:25.356 | 99.99th=[42206] 00:08:25.356 bw ( KiB/s): min= 96, max= 96, per=0.63%, 
avg=96.00, stdev= 0.00, samples=5 00:08:25.356 iops : min= 24, max= 24, avg=24.00, stdev= 0.00, samples=5 00:08:25.356 lat (msec) : 50=98.53% 00:08:25.356 cpu : usr=0.00%, sys=0.14%, ctx=71, majf=0, minf=2 00:08:25.356 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:25.356 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:25.356 issued rwts: total=68,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:25.356 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:25.356 00:08:25.356 Run status group 0 (all jobs): 00:08:25.356 READ: bw=14.9MiB/s (15.6MB/s), 97.1KiB/s-5739KiB/s (99.4kB/s-5876kB/s), io=50.3MiB (52.7MB), run=2761-3382msec 00:08:25.356 00:08:25.356 Disk stats (read/write): 00:08:25.356 nvme0n1: ios=3702/0, merge=0/0, ticks=2973/0, in_queue=2973, util=93.84% 00:08:25.356 nvme0n2: ios=4830/0, merge=0/0, ticks=3175/0, in_queue=3175, util=94.17% 00:08:25.356 nvme0n3: ios=3772/0, merge=0/0, ticks=3604/0, in_queue=3604, util=98.82% 00:08:25.356 nvme0n4: ios=101/0, merge=0/0, ticks=3426/0, in_queue=3426, util=100.00% 00:08:25.614 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:25.614 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:08:25.871 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:25.871 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:08:26.129 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:08:26.129 08:52:41 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2216943 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:08:26.387 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:26.645 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:08:26.645 08:52:42 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:08:26.645 nvmf hotplug test: fio failed as expected 00:08:26.645 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:26.904 rmmod nvme_tcp 00:08:26.904 rmmod nvme_fabrics 00:08:26.904 rmmod nvme_keyring 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # 
set -e 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 2214014 ']' 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 2214014 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2214014 ']' 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2214014 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2214014 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2214014' 00:08:26.904 killing process with pid 2214014 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2214014 00:08:26.904 08:52:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2214014 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 
-- # remove_target_ns 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:27.163 08:52:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- 
# (( 4 == 3 )) 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:08:29.070 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:08:29.329 00:08:29.329 real 0m27.233s 00:08:29.329 user 1m48.219s 00:08:29.329 sys 0m8.563s 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:08:29.329 ************************************ 00:08:29.329 END TEST nvmf_fio_target 00:08:29.329 ************************************ 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:29.329 08:52:45 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:29.329 ************************************ 00:08:29.329 START TEST nvmf_bdevio 00:08:29.329 ************************************ 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:08:29.329 * Looking for test storage... 00:08:29.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@338 -- # local 'op=<' 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:29.329 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:29.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.330 --rc genhtml_branch_coverage=1 00:08:29.330 --rc genhtml_function_coverage=1 00:08:29.330 --rc genhtml_legend=1 00:08:29.330 --rc geninfo_all_blocks=1 00:08:29.330 --rc geninfo_unexecuted_blocks=1 00:08:29.330 00:08:29.330 ' 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:29.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.330 --rc genhtml_branch_coverage=1 00:08:29.330 --rc genhtml_function_coverage=1 00:08:29.330 --rc genhtml_legend=1 00:08:29.330 --rc geninfo_all_blocks=1 00:08:29.330 --rc geninfo_unexecuted_blocks=1 00:08:29.330 00:08:29.330 ' 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:29.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.330 --rc genhtml_branch_coverage=1 00:08:29.330 --rc genhtml_function_coverage=1 00:08:29.330 --rc genhtml_legend=1 00:08:29.330 --rc geninfo_all_blocks=1 00:08:29.330 --rc geninfo_unexecuted_blocks=1 00:08:29.330 00:08:29.330 ' 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:29.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:29.330 --rc genhtml_branch_coverage=1 00:08:29.330 --rc genhtml_function_coverage=1 00:08:29.330 --rc genhtml_legend=1 00:08:29.330 --rc geninfo_all_blocks=1 00:08:29.330 --rc geninfo_unexecuted_blocks=1 00:08:29.330 00:08:29.330 ' 
00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:29.330 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:08:29.588 
08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:29.588 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:08:29.588 08:52:45 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.163 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:36.163 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # 
x722=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:36.164 
08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:36.164 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:36.164 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:36.164 Found net devices under 0000:86:00.0: cvl_0_0 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ 
tcp == tcp ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:36.164 Found net devices under 0000:86:00.1: cvl_0_1 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:36.164 08:52:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:36.164 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:36.165 10.0.0.1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # 
ip=10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:36.165 10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- 
# ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:36.165 08:52:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:36.165 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:36.165 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.408 ms 00:08:36.165 00:08:36.165 --- 10.0.0.1 ping statistics --- 00:08:36.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.165 rtt min/avg/max/mdev = 0.408/0.408/0.408/0.000 ms 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:36.165 08:52:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:36.165 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:36.165 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:08:36.165 00:08:36.165 --- 10.0.0.2 ping statistics --- 00:08:36.165 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:36.165 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:08:36.165 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ 
-n initiator0 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:36.166 08:52:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:36.166 
08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=2221584 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 2221584 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2221584 ']' 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:36.166 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.166 [2024-11-20 08:52:51.556932] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:08:36.166 [2024-11-20 08:52:51.556991] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:36.166 [2024-11-20 08:52:51.631898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:36.166 [2024-11-20 08:52:51.674060] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:36.166 [2024-11-20 08:52:51.674097] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:36.166 [2024-11-20 08:52:51.674104] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:36.166 [2024-11-20 08:52:51.674110] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:36.166 [2024-11-20 08:52:51.674115] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:36.166 [2024-11-20 08:52:51.675761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:36.166 [2024-11-20 08:52:51.675795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:36.166 [2024-11-20 08:52:51.675901] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:36.166 [2024-11-20 08:52:51.675902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.166 [2024-11-20 08:52:51.810961] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:36.166 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.166 08:52:51 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.166 Malloc0 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.167 [2024-11-20 08:52:51.868689] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:36.167 { 00:08:36.167 "params": { 00:08:36.167 "name": "Nvme$subsystem", 00:08:36.167 "trtype": "$TEST_TRANSPORT", 00:08:36.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:36.167 "adrfam": "ipv4", 00:08:36.167 "trsvcid": "$NVMF_PORT", 00:08:36.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:36.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:36.167 "hdgst": ${hdgst:-false}, 00:08:36.167 "ddgst": ${ddgst:-false} 00:08:36.167 }, 00:08:36.167 "method": "bdev_nvme_attach_controller" 00:08:36.167 } 00:08:36.167 EOF 00:08:36.167 )") 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:08:36.167 08:52:51 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:36.167 "params": { 00:08:36.167 "name": "Nvme1", 00:08:36.167 "trtype": "tcp", 00:08:36.167 "traddr": "10.0.0.2", 00:08:36.167 "adrfam": "ipv4", 00:08:36.167 "trsvcid": "4420", 00:08:36.167 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:36.167 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:36.167 "hdgst": false, 00:08:36.167 "ddgst": false 00:08:36.167 }, 00:08:36.167 "method": "bdev_nvme_attach_controller" 00:08:36.167 }' 00:08:36.167 [2024-11-20 08:52:51.905635] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:08:36.167 [2024-11-20 08:52:51.905682] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2221612 ] 00:08:36.167 [2024-11-20 08:52:51.982008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:36.167 [2024-11-20 08:52:52.027089] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:36.167 [2024-11-20 08:52:52.027197] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:36.167 [2024-11-20 08:52:52.027198] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:36.425 I/O targets: 00:08:36.425 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:08:36.425 00:08:36.425 00:08:36.425 CUnit - A unit testing framework for C - Version 2.1-3 00:08:36.425 http://cunit.sourceforge.net/ 00:08:36.425 00:08:36.425 00:08:36.425 Suite: bdevio tests on: Nvme1n1 00:08:36.425 Test: blockdev write read block ...passed 00:08:36.425 Test: blockdev write zeroes read block ...passed 00:08:36.425 Test: blockdev write zeroes read no split ...passed 00:08:36.425 Test: blockdev write zeroes read split 
...passed 00:08:36.425 Test: blockdev write zeroes read split partial ...passed 00:08:36.425 Test: blockdev reset ...[2024-11-20 08:52:52.463179] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:08:36.425 [2024-11-20 08:52:52.463242] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x538340 (9): Bad file descriptor 00:08:36.683 [2024-11-20 08:52:52.516633] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:08:36.683 passed 00:08:36.683 Test: blockdev write read 8 blocks ...passed 00:08:36.683 Test: blockdev write read size > 128k ...passed 00:08:36.683 Test: blockdev write read invalid size ...passed 00:08:36.683 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:08:36.683 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:08:36.683 Test: blockdev write read max offset ...passed 00:08:36.683 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:08:36.683 Test: blockdev writev readv 8 blocks ...passed 00:08:36.683 Test: blockdev writev readv 30 x 1block ...passed 00:08:36.683 Test: blockdev writev readv block ...passed 00:08:36.683 Test: blockdev writev readv size > 128k ...passed 00:08:36.683 Test: blockdev writev readv size > 128k in two iovs ...passed 00:08:36.683 Test: blockdev comparev and writev ...[2024-11-20 08:52:52.687312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.687341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.687355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 
08:52:52.687362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.687618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.687628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.687639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.687646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.687877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.687887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.687898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.687905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.688142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.688153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:08:36.683 [2024-11-20 08:52:52.688165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:08:36.683 [2024-11-20 08:52:52.688171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:08:36.941 passed 00:08:36.941 Test: blockdev nvme passthru rw ...passed 00:08:36.941 Test: blockdev nvme passthru vendor specific ...[2024-11-20 08:52:52.770320] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:36.941 [2024-11-20 08:52:52.770341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:08:36.941 [2024-11-20 08:52:52.770451] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:36.941 [2024-11-20 08:52:52.770460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:08:36.941 [2024-11-20 08:52:52.770574] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:36.941 [2024-11-20 08:52:52.770583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:08:36.941 [2024-11-20 08:52:52.770702] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:08:36.941 [2024-11-20 08:52:52.770715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:08:36.941 passed 00:08:36.941 Test: blockdev nvme admin passthru ...passed 00:08:36.941 Test: blockdev copy ...passed 00:08:36.941 00:08:36.941 Run Summary: Type Total Ran Passed Failed Inactive 00:08:36.941 suites 1 1 n/a 0 0 00:08:36.941 tests 23 23 23 0 0 00:08:36.941 asserts 152 152 152 0 n/a 00:08:36.941 00:08:36.941 Elapsed time = 0.955 seconds 
00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:36.941 08:52:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:36.941 rmmod nvme_tcp 00:08:37.200 rmmod nvme_fabrics 00:08:37.200 rmmod nvme_keyring 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 2221584 ']' 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 2221584 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 2221584 ']' 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2221584 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2221584 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2221584' 00:08:37.200 killing process with pid 2221584 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2221584 00:08:37.200 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2221584 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:37.459 08:52:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 
-- # delete_main_bridge 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush 
dev cvl_0_1 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:08:39.366 00:08:39.366 real 0m10.145s 00:08:39.366 user 0m10.338s 00:08:39.366 sys 0m5.079s 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:08:39.366 ************************************ 00:08:39.366 END TEST nvmf_bdevio 00:08:39.366 ************************************ 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:39.366 08:52:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:39.626 ************************************ 00:08:39.626 START TEST nvmf_zcopy 00:08:39.626 ************************************ 
00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:39.626 * Looking for test storage... 00:08:39.626 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:39.626 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.626 --rc genhtml_branch_coverage=1 00:08:39.626 --rc genhtml_function_coverage=1 00:08:39.626 --rc genhtml_legend=1 00:08:39.626 --rc geninfo_all_blocks=1 00:08:39.626 --rc geninfo_unexecuted_blocks=1 00:08:39.626 00:08:39.626 ' 00:08:39.626 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:39.626 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.626 --rc genhtml_branch_coverage=1 00:08:39.627 --rc genhtml_function_coverage=1 00:08:39.627 --rc genhtml_legend=1 00:08:39.627 --rc geninfo_all_blocks=1 00:08:39.627 --rc geninfo_unexecuted_blocks=1 00:08:39.627 00:08:39.627 ' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:39.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.627 --rc genhtml_branch_coverage=1 00:08:39.627 --rc genhtml_function_coverage=1 00:08:39.627 --rc genhtml_legend=1 00:08:39.627 --rc geninfo_all_blocks=1 00:08:39.627 --rc geninfo_unexecuted_blocks=1 00:08:39.627 00:08:39.627 ' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:39.627 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:39.627 --rc genhtml_branch_coverage=1 00:08:39.627 --rc genhtml_function_coverage=1 00:08:39.627 --rc genhtml_legend=1 00:08:39.627 --rc geninfo_all_blocks=1 00:08:39.627 --rc geninfo_unexecuted_blocks=1 00:08:39.627 00:08:39.627 ' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 
-- # NVMF_PORT=4420 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:39.627 08:52:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:08:39.627 08:52:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:39.627 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:39.627 08:52:55 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:08:39.627 08:52:55 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:08:46.281 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:46.281 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:46.281 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:46.281 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:46.281 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:46.282 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:46.282 Found net devices under 0000:86:00.0: cvl_0_0 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:46.282 
08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:46.282 Found net devices under 0000:86:00.1: cvl_0_1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 
key_target=target0 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:46.282 10.0.0.1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:46.282 10.0.0.2 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:46.282 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:46.283 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:46.283 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.475 ms 00:08:46.283 00:08:46.283 --- 10.0.0.1 ping statistics --- 00:08:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.283 rtt min/avg/max/mdev = 0.475/0.475/0.475/0.000 ms 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:46.283 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:46.283 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:08:46.283 00:08:46.283 --- 10.0.0.2 ping statistics --- 00:08:46.283 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:46.283 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:46.283 08:53:01 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:46.283 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=2225394 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 2225394 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2225394 ']' 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:46.284 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:46.284 08:53:01 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 [2024-11-20 08:53:01.815439] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:08:46.284 [2024-11-20 08:53:01.815490] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:46.284 [2024-11-20 08:53:01.893762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.284 [2024-11-20 08:53:01.935975] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:46.284 [2024-11-20 08:53:01.936013] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:46.284 [2024-11-20 08:53:01.936020] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:46.284 [2024-11-20 08:53:01.936027] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:46.284 [2024-11-20 08:53:01.936032] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:46.284 [2024-11-20 08:53:01.936627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 [2024-11-20 08:53:02.071804] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:46.284 08:53:02 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 [2024-11-20 08:53:02.091986] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 malloc0 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:46.284 { 00:08:46.284 "params": { 00:08:46.284 "name": "Nvme$subsystem", 00:08:46.284 "trtype": "$TEST_TRANSPORT", 00:08:46.284 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:46.284 "adrfam": "ipv4", 00:08:46.284 "trsvcid": "$NVMF_PORT", 00:08:46.284 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:46.284 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:46.284 "hdgst": ${hdgst:-false}, 00:08:46.284 "ddgst": ${ddgst:-false} 00:08:46.284 }, 00:08:46.284 "method": "bdev_nvme_attach_controller" 00:08:46.284 } 00:08:46.284 EOF 00:08:46.284 )") 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:08:46.284 08:53:02 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:46.284 "params": { 00:08:46.284 "name": "Nvme1", 00:08:46.284 "trtype": "tcp", 00:08:46.284 "traddr": "10.0.0.2", 00:08:46.284 "adrfam": "ipv4", 00:08:46.284 "trsvcid": "4420", 00:08:46.284 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:46.284 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:46.284 "hdgst": false, 00:08:46.284 "ddgst": false 00:08:46.284 }, 00:08:46.284 "method": "bdev_nvme_attach_controller" 00:08:46.284 }' 00:08:46.284 [2024-11-20 08:53:02.176480] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:08:46.284 [2024-11-20 08:53:02.176523] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2225414 ] 00:08:46.284 [2024-11-20 08:53:02.249356] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:46.284 [2024-11-20 08:53:02.291784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:46.576 Running I/O for 10 seconds... 
00:08:48.882 8430.00 IOPS, 65.86 MiB/s [2024-11-20T07:53:05.898Z] 8517.00 IOPS, 66.54 MiB/s [2024-11-20T07:53:06.833Z] 8529.00 IOPS, 66.63 MiB/s [2024-11-20T07:53:07.769Z] 8541.25 IOPS, 66.73 MiB/s [2024-11-20T07:53:08.706Z] 8548.80 IOPS, 66.79 MiB/s [2024-11-20T07:53:09.642Z] 8550.33 IOPS, 66.80 MiB/s [2024-11-20T07:53:11.017Z] 8551.29 IOPS, 66.81 MiB/s [2024-11-20T07:53:11.952Z] 8538.00 IOPS, 66.70 MiB/s [2024-11-20T07:53:12.889Z] 8537.67 IOPS, 66.70 MiB/s [2024-11-20T07:53:12.890Z] 8539.70 IOPS, 66.72 MiB/s 00:08:56.849 Latency(us) 00:08:56.849 [2024-11-20T07:53:12.890Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:56.849 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:56.849 Verification LBA range: start 0x0 length 0x1000 00:08:56.849 Nvme1n1 : 10.01 8540.91 66.73 0.00 0.00 14943.15 1168.25 22453.20 00:08:56.849 [2024-11-20T07:53:12.890Z] =================================================================================================================== 00:08:56.849 [2024-11-20T07:53:12.890Z] Total : 8540.91 66.73 0.00 0.00 14943.15 1168.25 22453.20 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=2227258 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:08:56.849 08:53:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:56.849 { 00:08:56.849 "params": { 00:08:56.849 "name": "Nvme$subsystem", 00:08:56.849 "trtype": "$TEST_TRANSPORT", 00:08:56.849 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:56.849 "adrfam": "ipv4", 00:08:56.849 "trsvcid": "$NVMF_PORT", 00:08:56.849 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:56.849 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:56.849 "hdgst": ${hdgst:-false}, 00:08:56.849 "ddgst": ${ddgst:-false} 00:08:56.849 }, 00:08:56.849 "method": "bdev_nvme_attach_controller" 00:08:56.849 } 00:08:56.849 EOF 00:08:56.849 )") 00:08:56.849 [2024-11-20 08:53:12.768255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.768288] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:08:56.849 08:53:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:56.849 "params": { 00:08:56.849 "name": "Nvme1", 00:08:56.849 "trtype": "tcp", 00:08:56.849 "traddr": "10.0.0.2", 00:08:56.849 "adrfam": "ipv4", 00:08:56.849 "trsvcid": "4420", 00:08:56.849 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:56.849 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:56.849 "hdgst": false, 00:08:56.849 "ddgst": false 00:08:56.849 }, 00:08:56.849 "method": "bdev_nvme_attach_controller" 00:08:56.849 }' 00:08:56.849 [2024-11-20 08:53:12.780253] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.780266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.792289] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.792299] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.804320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.804330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.808078] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:08:56.849 [2024-11-20 08:53:12.808119] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2227258 ] 00:08:56.849 [2024-11-20 08:53:12.816355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.816365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.828387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.828398] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.840420] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.840430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.852462] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.852475] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.864489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.864499] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.876517] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:56.849 [2024-11-20 08:53:12.876527] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:56.849 [2024-11-20 08:53:12.884373] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:57.108 [2024-11-20 08:53:12.888548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:08:57.108 [2024-11-20 08:53:12.888558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.900580] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.900593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.912610] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.912619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.924645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.924658] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.926848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:57.108 [2024-11-20 08:53:12.936680] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.936693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.948719] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.948740] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.960745] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.960759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.972775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.972788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.984806] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.984818] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:12.996836] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:12.996854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.008869] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.008881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.020912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.020931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.032945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.032965] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.044979] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.044994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.057028] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.057044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.069032] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.069041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.081073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.081091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 Running I/O for 5 seconds... 00:08:57.108 [2024-11-20 08:53:13.096347] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.096367] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.110686] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.110712] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.124729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.124748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.108 [2024-11-20 08:53:13.135722] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.108 [2024-11-20 08:53:13.135741] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.150604] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.150623] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.166302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.166320] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.180572] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.180592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.194921] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.194941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.206384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.206403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.221188] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.221218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.232589] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.232608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.247412] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.247431] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.258776] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.258795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.273304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.273323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.287594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.287613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.298518] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 
[2024-11-20 08:53:13.298537] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.313288] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.313307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.324492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.324511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.339426] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.339445] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.349903] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.349922] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.364734] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.364753] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.375341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.375359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.389531] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.389549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.369 [2024-11-20 08:53:13.403555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.369 [2024-11-20 08:53:13.403574] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.417872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.417892] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.428551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.428570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.443249] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.443268] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.458621] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.458640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.473047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.473066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.487255] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.487276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.501222] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.501242] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:57.629 [2024-11-20 08:53:13.515341] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.515361] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:57.629 [2024-11-20 08:53:13.529539] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:57.629 [2024-11-20 08:53:13.529558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same "Requested NSID 1 already in use" / "Unable to add namespace" error pair repeats at roughly 10-15 ms intervals from 08:53:13.543 through 08:53:14.084 ...]
00:08:58.149 16392.00 IOPS, 128.06 MiB/s [2024-11-20T07:53:14.190Z]
[... error pair repeats from 08:53:14.098 through 08:53:15.081 ...]
00:08:59.185 16441.00 IOPS, 128.45 MiB/s [2024-11-20T07:53:15.226Z]
[... error pair repeats from 08:53:15.092 through 08:53:15.867 ...]
00:08:59.962 [2024-11-20 08:53:15.878300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.878318]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.892886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.892904] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.907318] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.907337] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.918434] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.918453] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.933294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.933313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.948984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.949003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.963573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.963592] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.978826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.978846] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:59.962 [2024-11-20 08:53:15.992886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:59.962 [2024-11-20 08:53:15.992905] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.221 [2024-11-20 08:53:16.006937] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.006963] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.021020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.021040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.035236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.035255] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.049080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.049099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.063205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.063224] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.076835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.076854] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.091001] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.091024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 16463.33 IOPS, 128.62 MiB/s [2024-11-20T07:53:16.262Z] [2024-11-20 08:53:16.105444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.105463] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.221 [2024-11-20 08:53:16.116446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.116465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.130812] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.130830] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.144967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.144986] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.221 [2024-11-20 08:53:16.159122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.221 [2024-11-20 08:53:16.159141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.173428] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.173447] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.187430] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.187449] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.201499] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.201517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.217130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.217149] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.231453] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.231472] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.245783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.245801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.222 [2024-11-20 08:53:16.254835] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.222 [2024-11-20 08:53:16.254853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.480 [2024-11-20 08:53:16.269564] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.480 [2024-11-20 08:53:16.269584] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.480 [2024-11-20 08:53:16.283886] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.480 [2024-11-20 08:53:16.283906] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.294446] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.294465] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.309312] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.309332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.320272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.320291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.334942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.334966] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.348524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.348544] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.363071] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.363091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.377044] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.377064] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.391422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.391441] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.405593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.405613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.416591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.416610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.431594] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.431614] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.442588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 
[2024-11-20 08:53:16.442607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.457484] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.457503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.468380] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.468399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.483306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.483325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.499154] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.499174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.481 [2024-11-20 08:53:16.513487] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.481 [2024-11-20 08:53:16.513505] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.527349] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.527368] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.541489] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.541509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.555455] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.555474] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.569689] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.569710] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.583459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.583479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.597788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.597807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.611796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.611817] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.626415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.626435] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.641891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.641912] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.656274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.656294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.670683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.670704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:00.739 [2024-11-20 08:53:16.682217] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.682237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.691761] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.691781] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.706537] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.706557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.720558] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.720577] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.731018] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.731036] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.740527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.740546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.749883] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.749901] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.764437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.764456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.739 [2024-11-20 08:53:16.778204] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.739 [2024-11-20 08:53:16.778222] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.792579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.792597] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.806522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.806543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.820598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.820618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.834701] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.834721] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.848853] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.848873] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.863053] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.863073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.877257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.877277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.891191] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.891210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.905414] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.905432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.919374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.919392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.933858] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.933877] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.948185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.948203] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.963302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.963321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.978112] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.978130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:16.993320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:16.993339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:17.003026] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 
[2024-11-20 08:53:17.003044] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:17.017797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:17.017816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:00.998 [2024-11-20 08:53:17.028922] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:00.998 [2024-11-20 08:53:17.028941] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.043862] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.043881] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.059650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.059670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.074228] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.074251] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.088821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.088839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 16453.00 IOPS, 128.54 MiB/s [2024-11-20T07:53:17.299Z] [2024-11-20 08:53:17.104355] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.104374] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.118525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 
[2024-11-20 08:53:17.118549] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.132740] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.132759] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.146965] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.147000] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.161410] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.161429] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.177581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.177600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.191839] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.191857] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.205997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.206016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.220130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.220148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.234134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.234153] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.243384] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.243403] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.257899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.257918] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.271661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.271680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.282157] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.282176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.258 [2024-11-20 08:53:17.291627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.258 [2024-11-20 08:53:17.291646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.306492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.306511] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.317405] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.317424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.332103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.332122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:01.517 [2024-11-20 08:53:17.346190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.346209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.360676] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.360695] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.374784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.374808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.388984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.389002] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.399791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.399808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.414865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.414885] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.426022] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.426041] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.440516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.440534] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.455112] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.455130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.470581] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.470600] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.484807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.484826] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.498906] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.498925] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.512625] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.512644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.526332] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.526350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.540316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.540335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.517 [2024-11-20 08:53:17.554777] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.517 [2024-11-20 08:53:17.554797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.566520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.566539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.581532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.581551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.596816] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.596835] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.611239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.611257] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.625130] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.625148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.638925] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.638958] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.776 [2024-11-20 08:53:17.653127] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.776 [2024-11-20 08:53:17.653146] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.667007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.667027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.681683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 
[2024-11-20 08:53:17.681704] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.693025] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.693046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.707601] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.707621] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.721559] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.721579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.735817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.735836] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.751275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.751296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.765641] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.765660] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.775304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.775323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.789629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.789649] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:01.777 [2024-11-20 08:53:17.803524] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:01.777 [2024-11-20 08:53:17.803543] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.817792] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.817813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.831926] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.831946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.845848] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.845870] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.860548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.860568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.871755] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.871774] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.886624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.886646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.897865] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.897886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:02.036 [2024-11-20 08:53:17.912310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.912330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.926257] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.926276] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.940274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.940294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.954490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.954510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.968645] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.968665] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.982475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.982495] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:17.996750] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:17.996770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.007715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.007735] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.022604] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.022625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.033156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.033176] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.042828] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.042847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.052198] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.052217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.036 [2024-11-20 08:53:18.067234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.036 [2024-11-20 08:53:18.067252] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.083458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.083479] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.094329] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.094350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 16457.00 IOPS, 128.57 MiB/s [2024-11-20T07:53:18.336Z] [2024-11-20 08:53:18.106153] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.106172] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 00:09:02.295 Latency(us) 
00:09:02.295 [2024-11-20T07:53:18.336Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:02.295 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:09:02.295 Nvme1n1 : 5.01 16462.14 128.61 0.00 0.00 7767.40 3447.76 16298.52 00:09:02.295 [2024-11-20T07:53:18.336Z] =================================================================================================================== 00:09:02.295 [2024-11-20T07:53:18.336Z] Total : 16462.14 128.61 0.00 0.00 7767.40 3447.76 16298.52 00:09:02.295 [2024-11-20 08:53:18.117074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.117090] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.129104] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.129119] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.141145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.141163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.153171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.153187] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.165216] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.165233] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.177245] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.177259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:09:02.295 [2024-11-20 08:53:18.189265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.189278] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.201297] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.201310] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.213328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.213340] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.225359] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.225369] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.237393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.295 [2024-11-20 08:53:18.237405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.295 [2024-11-20 08:53:18.249422] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.296 [2024-11-20 08:53:18.249432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.296 [2024-11-20 08:53:18.261459] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:02.296 [2024-11-20 08:53:18.261470] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:02.296 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (2227258) - No such process 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 2227258 00:09:02.296 08:53:18 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.296 delay0 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:02.296 08:53:18 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:09:02.554 [2024-11-20 08:53:18.457132] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:09:09.115 [2024-11-20 08:53:24.512557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x217ca80 is same with the state(6) to be set 00:09:09.115 Initializing NVMe Controllers 00:09:09.115 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:09.115 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:09.115 Initialization complete. Launching workers. 00:09:09.115 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 116 00:09:09.115 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 403, failed to submit 33 00:09:09.115 success 230, unsuccessful 173, failed 0 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:09.115 rmmod nvme_tcp 00:09:09.115 rmmod nvme_fabrics 00:09:09.115 rmmod nvme_keyring 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 2225394 ']' 00:09:09.115 08:53:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 2225394 ']' 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2225394' 00:09:09.115 killing process with pid 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2225394 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:09.115 08:53:24 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:09.115 08:53:24 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:09:11.021 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:09:11.022 00:09:11.022 real 0m31.487s 00:09:11.022 user 0m41.887s 00:09:11.022 sys 0m11.131s 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:11.022 ************************************ 00:09:11.022 END TEST nvmf_zcopy 00:09:11.022 ************************************ 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:09:11.022 00:09:11.022 real 4m29.251s 00:09:11.022 user 10m19.653s 00:09:11.022 sys 1m34.017s 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:11.022 08:53:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:11.022 ************************************ 00:09:11.022 END TEST nvmf_target_core 00:09:11.022 ************************************ 00:09:11.022 08:53:26 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:11.022 08:53:26 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.022 08:53:26 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.022 08:53:26 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:11.022 ************************************ 00:09:11.022 START TEST nvmf_target_extra 00:09:11.022 ************************************ 00:09:11.022 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:11.281 * Looking for test storage... 00:09:11.282 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.282 08:53:27 
nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.282 --rc genhtml_branch_coverage=1 00:09:11.282 --rc genhtml_function_coverage=1 00:09:11.282 --rc genhtml_legend=1 00:09:11.282 --rc geninfo_all_blocks=1 00:09:11.282 --rc geninfo_unexecuted_blocks=1 00:09:11.282 00:09:11.282 ' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.282 --rc genhtml_branch_coverage=1 00:09:11.282 --rc genhtml_function_coverage=1 00:09:11.282 --rc genhtml_legend=1 00:09:11.282 --rc geninfo_all_blocks=1 00:09:11.282 --rc geninfo_unexecuted_blocks=1 00:09:11.282 00:09:11.282 ' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.282 --rc genhtml_branch_coverage=1 00:09:11.282 --rc genhtml_function_coverage=1 00:09:11.282 --rc genhtml_legend=1 00:09:11.282 --rc geninfo_all_blocks=1 00:09:11.282 --rc geninfo_unexecuted_blocks=1 00:09:11.282 00:09:11.282 ' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.282 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.282 --rc genhtml_branch_coverage=1 00:09:11.282 --rc genhtml_function_coverage=1 00:09:11.282 --rc genhtml_legend=1 00:09:11.282 --rc geninfo_all_blocks=1 00:09:11.282 --rc geninfo_unexecuted_blocks=1 00:09:11.282 00:09:11.282 ' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:09:11.282 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:11.283 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 
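[Editor's note] The `[: : integer expression expected` message above (common.sh line 31) is a classic bash pitfall captured by this trace: an unset or empty variable fed to a numeric `-eq` test. The sketch below reproduces the failure mode and shows one defensive pattern; the variable name is illustrative, not taken from the SPDK scripts.

```shell
# An empty string in a numeric test triggers the error seen in the log:
#   '[' '' -eq 1 ']'  ->  [: : integer expression expected
unset EXAMPLE_FLAG
# Defaulting the value before comparing avoids the error entirely:
if [ "${EXAMPLE_FLAG:-0}" -eq 1 ]; then
  echo "flag set"
else
  echo "flag unset"
fi
```

Note the test still "works" in the log because `[` returns nonzero on the error, so the `if` falls through; the stderr noise is the only symptom.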
00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:11.283 ************************************ 00:09:11.283 START TEST nvmf_example 00:09:11.283 ************************************ 00:09:11.283 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:11.543 * Looking for test storage... 
00:09:11.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:09:11.543 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.544 --rc genhtml_branch_coverage=1 00:09:11.544 --rc 
genhtml_function_coverage=1 00:09:11.544 --rc genhtml_legend=1 00:09:11.544 --rc geninfo_all_blocks=1 00:09:11.544 --rc geninfo_unexecuted_blocks=1 00:09:11.544 00:09:11.544 ' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.544 --rc genhtml_branch_coverage=1 00:09:11.544 --rc genhtml_function_coverage=1 00:09:11.544 --rc genhtml_legend=1 00:09:11.544 --rc geninfo_all_blocks=1 00:09:11.544 --rc geninfo_unexecuted_blocks=1 00:09:11.544 00:09:11.544 ' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.544 --rc genhtml_branch_coverage=1 00:09:11.544 --rc genhtml_function_coverage=1 00:09:11.544 --rc genhtml_legend=1 00:09:11.544 --rc geninfo_all_blocks=1 00:09:11.544 --rc geninfo_unexecuted_blocks=1 00:09:11.544 00:09:11.544 ' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:11.544 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:11.544 --rc genhtml_branch_coverage=1 00:09:11.544 --rc genhtml_function_coverage=1 00:09:11.544 --rc genhtml_legend=1 00:09:11.544 --rc geninfo_all_blocks=1 00:09:11.544 --rc geninfo_unexecuted_blocks=1 00:09:11.544 00:09:11.544 ' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@15 -- # shopt -s extglob 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:11.544 08:53:27 
nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:11.544 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
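[Editor's note] The PATH values traced above grow longer each time `paths/export.sh` is re-sourced, because the tool directories (`/opt/go/...`, `/opt/protoc/...`, `/opt/golangci/...`) are prepended unconditionally. A sketch of an idempotent prepend (an assumption about a possible fix, not what `export.sh` does today):

```shell
# Prepend a directory to PATH only if it is not already present,
# so repeated sourcing does not duplicate entries.
path_prepend() {
  case ":$PATH:" in
    *":$1:"*) ;;            # already on PATH, do nothing
    *) PATH="$1:$PATH" ;;   # prepend once
  esac
}
path_prepend /opt/go/1.21.1/bin
```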
00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:11.545 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:11.545 
08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:09:11.545 08:53:27 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:18.112 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:18.112 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:18.112 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:18.112 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:18.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:18.113 Found net devices under 0000:86:00.0: cvl_0_0 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:18.113 Found net devices under 0000:86:00.1: cvl_0_1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # create_target_ns 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@53 -- 
# [[ tcp == rdma ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr 
add 10.0.0.1/24 dev cvl_0_0 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:18.113 10.0.0.1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:18.113 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:18.113 10.0.0.2 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:18.113 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:18.114 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
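The set_ip calls traced above (setup.sh@204-210) turn a 32-bit pool value such as 167772161 into a dotted-quad address before handing it to `ip addr add`. A minimal reconstruction of that val_to_ip helper is sketched below; the bit-shift decomposition is an assumption inferred from the observed `printf '%u.%u.%u.%u\n' 10 0 0 1` arguments, not the verbatim setup.sh source.

```shell
# Sketch of the val_to_ip pattern seen in the trace: split a 32-bit
# integer into four octets (high byte first) and print a dotted quad.
# The shift/mask decomposition is an assumption matching the observed output.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # -> 10.0.0.2 (0x0A000002)
```

This is why the trace shows `ip_pool += 2` per device pair: each initiator/target pair consumes two consecutive values from the pool.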
00:09:18.114 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.478 ms 00:09:18.114 00:09:18.114 --- 10.0.0.1 ping statistics --- 00:09:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.114 rtt min/avg/max/mdev = 0.478/0.478/0.478/0.000 ms 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:18.114 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:18.114 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:18.114 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:09:18.114 00:09:18.114 --- 10.0.0.2 ping statistics --- 00:09:18.114 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:18.114 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.114 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:18.114 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:18.115 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:18.115 08:53:33 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2232933 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2232933 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 2232933 ']' 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.115 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
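The `waitforlisten 2232933` call above blocks until the freshly launched nvmf app is ready on /var/tmp/spdk.sock. A hedged sketch of that polling loop follows; the real autotest_common.sh helper also verifies the PID is still alive and probes the socket with rpc.py, which this privilege-free version omits. The function name is suffixed `_sketch` to mark it as a reconstruction.

```shell
# Hedged sketch of the waitforlisten pattern: poll until the app's RPC
# socket path appears, giving up after max_retries attempts. The default
# of 100 retries matches the local max_retries=100 visible in the trace.
waitforlisten_sketch() {
  local rpc_addr=$1 max_retries=${2:-100}
  local i
  for ((i = 0; i < max_retries; i++)); do
    # -S matches a real UNIX socket; -e keeps the sketch testable with a file
    [[ -S $rpc_addr || -e $rpc_addr ]] && return 0
    sleep 0.1
  done
  return 1
}

# demo against a stand-in for /var/tmp/spdk.sock
sock=$(mktemp)
waitforlisten_sketch "$sock" 5 && echo "listening"
rm -f "$sock"
```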
00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.115 08:53:33 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:18.681 08:53:34 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:30.873 Initializing NVMe Controllers 00:09:30.873 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:30.873 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:30.873 Initialization complete. Launching workers. 00:09:30.873 ======================================================== 00:09:30.873 Latency(us) 00:09:30.873 Device Information : IOPS MiB/s Average min max 00:09:30.873 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 17887.40 69.87 3578.38 708.77 16268.19 00:09:30.873 ======================================================== 00:09:30.873 Total : 17887.40 69.87 3578.38 708.77 16268.19 00:09:30.873 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:30.873 rmmod nvme_tcp 00:09:30.873 rmmod nvme_fabrics 00:09:30.873 rmmod nvme_keyring 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 2232933 ']' 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 2232933 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 2232933 ']' 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 2232933 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2232933 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2232933' 00:09:30.873 killing process with pid 2232933 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 2232933 00:09:30.873 08:53:44 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 2232933 00:09:30.873 nvmf threads initialize successfully 00:09:30.873 bdev subsystem init successfully 00:09:30.873 created a nvmf target service 00:09:30.873 create targets's poll groups done 00:09:30.873 all subsystems of target started 00:09:30.873 nvmf target is running 00:09:30.873 all subsystems of target stopped 00:09:30.873 destroy targets's poll groups done 00:09:30.873 destroyed the nvmf target service 00:09:30.873 bdev subsystem finish successfully 00:09:30.873 nvmf threads destroy successfully 00:09:30.873 08:53:45 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@264 -- # local dev 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:30.873 08:53:45 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # return 0 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_0 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:31.450 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@284 -- # iptr 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-save 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-restore 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.451 00:09:31.451 real 
0m20.032s 00:09:31.451 user 0m46.102s 00:09:31.451 sys 0m6.276s 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.451 ************************************ 00:09:31.451 END TEST nvmf_example 00:09:31.451 ************************************ 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:31.451 ************************************ 00:09:31.451 START TEST nvmf_filesystem 00:09:31.451 ************************************ 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:31.451 * Looking for test storage... 
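The `run_test` calls traced above wrap each suite in `START TEST` / `END TEST` banners with an argument-count guard (`'[' 3 -le 1 ']'`). A minimal sketch of that wrapper pattern, with hypothetical names (not SPDK's actual `autotest_common.sh` implementation), could look like:

```shell
#!/usr/bin/env bash
# Sketch of a run_test-style wrapper: print banners around a named test
# command and propagate its exit status. Function and test names are
# hypothetical, for illustration only.
run_test() {
    local name=$1
    shift
    # Mirror the traced argument guard: a command must follow the name.
    if [ "$#" -le 0 ]; then
        echo "usage: run_test <name> <cmd> [args...]" >&2
        return 1
    fi
    echo "************ START TEST $name ************"
    "$@"
    local rc=$?   # capture the test command's status before it is lost
    echo "************ END TEST $name ************"
    return $rc
}

run_test demo_suite true && echo "demo_suite passed"
```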
00:09:31.451 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.451 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.714 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:31.715 
08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.715 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:09:31.715 --rc genhtml_branch_coverage=1 00:09:31.715 --rc genhtml_function_coverage=1 00:09:31.715 --rc genhtml_legend=1 00:09:31.715 --rc geninfo_all_blocks=1 00:09:31.715 --rc geninfo_unexecuted_blocks=1 00:09:31.715 00:09:31.715 ' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.715 --rc genhtml_branch_coverage=1 00:09:31.715 --rc genhtml_function_coverage=1 00:09:31.715 --rc genhtml_legend=1 00:09:31.715 --rc geninfo_all_blocks=1 00:09:31.715 --rc geninfo_unexecuted_blocks=1 00:09:31.715 00:09:31.715 ' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.715 --rc genhtml_branch_coverage=1 00:09:31.715 --rc genhtml_function_coverage=1 00:09:31.715 --rc genhtml_legend=1 00:09:31.715 --rc geninfo_all_blocks=1 00:09:31.715 --rc geninfo_unexecuted_blocks=1 00:09:31.715 00:09:31.715 ' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.715 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.715 --rc genhtml_branch_coverage=1 00:09:31.715 --rc genhtml_function_coverage=1 00:09:31.715 --rc genhtml_legend=1 00:09:31.715 --rc geninfo_all_blocks=1 00:09:31.715 --rc geninfo_unexecuted_blocks=1 00:09:31.715 00:09:31.715 ' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:31.715 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:31.715 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:09:31.715 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:31.715 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:09:31.716 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.716 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:31.716 
08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:31.716 #define SPDK_CONFIG_H 00:09:31.716 #define SPDK_CONFIG_AIO_FSDEV 1 00:09:31.716 #define SPDK_CONFIG_APPS 1 00:09:31.716 #define SPDK_CONFIG_ARCH native 00:09:31.716 #undef SPDK_CONFIG_ASAN 00:09:31.716 #undef SPDK_CONFIG_AVAHI 00:09:31.716 #undef SPDK_CONFIG_CET 00:09:31.716 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:09:31.716 #define SPDK_CONFIG_COVERAGE 1 00:09:31.716 #define SPDK_CONFIG_CROSS_PREFIX 00:09:31.716 #undef SPDK_CONFIG_CRYPTO 00:09:31.716 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:31.716 #undef SPDK_CONFIG_CUSTOMOCF 00:09:31.716 #undef SPDK_CONFIG_DAOS 00:09:31.716 #define SPDK_CONFIG_DAOS_DIR 00:09:31.716 #define SPDK_CONFIG_DEBUG 1 00:09:31.716 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:31.716 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:31.716 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:31.716 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:31.716 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:31.716 #undef SPDK_CONFIG_DPDK_UADK 00:09:31.716 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:31.716 #define SPDK_CONFIG_EXAMPLES 1 00:09:31.716 #undef SPDK_CONFIG_FC 00:09:31.716 #define SPDK_CONFIG_FC_PATH 00:09:31.716 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:31.716 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:31.716 #define SPDK_CONFIG_FSDEV 1 00:09:31.716 #undef SPDK_CONFIG_FUSE 00:09:31.716 #undef SPDK_CONFIG_FUZZER 00:09:31.716 #define SPDK_CONFIG_FUZZER_LIB 00:09:31.716 #undef SPDK_CONFIG_GOLANG 00:09:31.716 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:31.716 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:31.716 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:31.716 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:31.716 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:31.716 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:31.716 #undef SPDK_CONFIG_HAVE_LZ4 00:09:31.716 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:09:31.716 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:09:31.716 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:31.716 #define SPDK_CONFIG_IDXD 1 00:09:31.716 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:31.716 #undef SPDK_CONFIG_IPSEC_MB 00:09:31.716 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:31.716 #define SPDK_CONFIG_ISAL 1 00:09:31.716 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:31.716 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:31.716 #define SPDK_CONFIG_LIBDIR 00:09:31.716 #undef SPDK_CONFIG_LTO 00:09:31.716 #define SPDK_CONFIG_MAX_LCORES 128 00:09:31.716 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:09:31.716 #define SPDK_CONFIG_NVME_CUSE 1 00:09:31.716 #undef SPDK_CONFIG_OCF 00:09:31.716 #define SPDK_CONFIG_OCF_PATH 00:09:31.716 #define SPDK_CONFIG_OPENSSL_PATH 00:09:31.716 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:31.716 #define SPDK_CONFIG_PGO_DIR 00:09:31.716 #undef SPDK_CONFIG_PGO_USE 00:09:31.716 #define SPDK_CONFIG_PREFIX /usr/local 00:09:31.716 #undef SPDK_CONFIG_RAID5F 00:09:31.716 #undef SPDK_CONFIG_RBD 00:09:31.716 #define SPDK_CONFIG_RDMA 1 00:09:31.716 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:31.716 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:31.716 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:31.716 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:31.716 #define SPDK_CONFIG_SHARED 1 00:09:31.716 #undef SPDK_CONFIG_SMA 00:09:31.716 #define SPDK_CONFIG_TESTS 1 00:09:31.716 #undef SPDK_CONFIG_TSAN 00:09:31.716 #define SPDK_CONFIG_UBLK 1 00:09:31.716 #define SPDK_CONFIG_UBSAN 1 00:09:31.716 #undef SPDK_CONFIG_UNIT_TESTS 00:09:31.716 #undef SPDK_CONFIG_URING 00:09:31.716 #define SPDK_CONFIG_URING_PATH 00:09:31.716 #undef SPDK_CONFIG_URING_ZNS 00:09:31.716 #undef SPDK_CONFIG_USDT 00:09:31.716 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:31.716 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:31.716 #define SPDK_CONFIG_VFIO_USER 1 00:09:31.716 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:31.716 #define SPDK_CONFIG_VHOST 1 00:09:31.716 #define SPDK_CONFIG_VIRTIO 1 00:09:31.716 #undef SPDK_CONFIG_VTUNE 00:09:31.716 #define SPDK_CONFIG_VTUNE_DIR 00:09:31.716 #define SPDK_CONFIG_WERROR 1 00:09:31.716 #define SPDK_CONFIG_WPDK_DIR 00:09:31.716 #undef SPDK_CONFIG_XNVME 00:09:31.716 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.716 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
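The `PATH` values exported above show the same `/opt/golangci`, `/opt/protoc`, and `/opt/go` directories repeated many times, which happens when a prepend-style `export.sh` is sourced repeatedly. A sketch of an idempotent prepend that would avoid such accumulation (the function name is hypothetical, not part of SPDK's `paths/export.sh`):

```shell
# Sketch of an idempotent PATH prepend: add the directory only if it is
# not already present, so repeated sourcing does not duplicate entries.
path_prepend() {
    case ":$PATH:" in
        *":$1:"*) ;;            # already on PATH: do nothing
        *) PATH="$1:$PATH" ;;   # prepend exactly once
    esac
}

PATH=/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin   # second call is a no-op
echo "$PATH"                      # /opt/go/1.21.1/bin:/usr/bin
```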
00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:31.717 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:31.717 
08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:31.717 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:31.717 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:09:31.718 
08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:09:31.718 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:09:31.718 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 2235276 ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 2235276 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.YNGUPS 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.YNGUPS/tests/target /tmp/spdk.YNGUPS 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189159510016 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963961344 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6804451328 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:31.719 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97975123968 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6856704 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169748992 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192793088 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981468672 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981980672 00:09:31.720 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=512000 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:09:31.720 * Looking for test storage... 
00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189159510016 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9019043840 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.720 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:31.720 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:31.720 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:31.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.980 --rc genhtml_branch_coverage=1 00:09:31.980 --rc genhtml_function_coverage=1 00:09:31.980 --rc genhtml_legend=1 00:09:31.980 --rc geninfo_all_blocks=1 00:09:31.980 --rc geninfo_unexecuted_blocks=1 00:09:31.980 00:09:31.980 ' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:31.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.980 --rc genhtml_branch_coverage=1 00:09:31.980 --rc genhtml_function_coverage=1 00:09:31.980 --rc genhtml_legend=1 00:09:31.980 --rc geninfo_all_blocks=1 00:09:31.980 --rc geninfo_unexecuted_blocks=1 00:09:31.980 00:09:31.980 ' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:31.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.980 --rc genhtml_branch_coverage=1 00:09:31.980 --rc genhtml_function_coverage=1 00:09:31.980 --rc genhtml_legend=1 00:09:31.980 --rc geninfo_all_blocks=1 00:09:31.980 --rc geninfo_unexecuted_blocks=1 00:09:31.980 00:09:31.980 ' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:31.980 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:31.980 --rc genhtml_branch_coverage=1 00:09:31.980 --rc genhtml_function_coverage=1 00:09:31.980 --rc genhtml_legend=1 00:09:31.980 --rc geninfo_all_blocks=1 00:09:31.980 --rc geninfo_unexecuted_blocks=1 00:09:31.980 00:09:31.980 ' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:31.980 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:31.980 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.980 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.981 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:31.981 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:31.981 08:53:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:09:31.981 08:53:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:38.551 08:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:38.551 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:38.551 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:38.551 Found net devices under 0000:86:00.0: cvl_0_0 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:38.551 Found net devices under 0000:86:00.1: cvl_0_1 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # create_target_ns 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:38.551 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 
00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 
00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:38.552 10.0.0.1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:38.552 10.0.0.2 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 
00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:38.552 08:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo 
cvl_0_0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:38.552 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:38.552 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:09:38.552 00:09:38.552 --- 10.0.0.1 ping statistics --- 00:09:38.552 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.552 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:09:38.552 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:38.553 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:38.553 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:09:38.553 00:09:38.553 --- 10.0.0.2 ping statistics --- 00:09:38.553 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:38.553 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # 
NVMF_TARGET_INTERFACE=cvl_0_1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:38.553 08:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:38.553 08:53:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:38.553 ************************************ 00:09:38.553 START TEST nvmf_filesystem_no_in_capsule 00:09:38.553 ************************************ 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=2238396 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 2238396 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:38.553 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2238396 ']' 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:38.554 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 [2024-11-20 08:53:54.066251] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:09:38.554 [2024-11-20 08:53:54.066295] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:38.554 [2024-11-20 08:53:54.144343] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:38.554 [2024-11-20 08:53:54.188045] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:38.554 [2024-11-20 08:53:54.188082] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:38.554 [2024-11-20 08:53:54.188089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:38.554 [2024-11-20 08:53:54.188095] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:38.554 [2024-11-20 08:53:54.188101] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:38.554 [2024-11-20 08:53:54.189569] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:38.554 [2024-11-20 08:53:54.189675] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:38.554 [2024-11-20 08:53:54.189708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:38.554 [2024-11-20 08:53:54.189710] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 [2024-11-20 08:53:54.326671] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 [2024-11-20 08:53:54.474434] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:38.554 08:53:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:38.554 { 00:09:38.554 "name": "Malloc1", 00:09:38.554 "aliases": [ 00:09:38.554 "c30898eb-0a9f-4ade-88c5-5716a6cf5a0b" 00:09:38.554 ], 00:09:38.554 "product_name": "Malloc disk", 00:09:38.554 "block_size": 512, 00:09:38.554 "num_blocks": 1048576, 00:09:38.554 "uuid": "c30898eb-0a9f-4ade-88c5-5716a6cf5a0b", 00:09:38.554 "assigned_rate_limits": { 00:09:38.554 "rw_ios_per_sec": 0, 00:09:38.554 "rw_mbytes_per_sec": 0, 00:09:38.554 "r_mbytes_per_sec": 0, 00:09:38.554 "w_mbytes_per_sec": 0 00:09:38.554 }, 00:09:38.554 "claimed": true, 00:09:38.554 "claim_type": "exclusive_write", 00:09:38.554 "zoned": false, 00:09:38.554 "supported_io_types": { 00:09:38.554 "read": true, 00:09:38.554 "write": true, 00:09:38.554 "unmap": true, 00:09:38.554 "flush": true, 00:09:38.554 "reset": true, 00:09:38.554 "nvme_admin": false, 00:09:38.554 "nvme_io": false, 00:09:38.554 "nvme_io_md": false, 00:09:38.554 "write_zeroes": true, 00:09:38.554 "zcopy": true, 00:09:38.554 "get_zone_info": false, 00:09:38.554 "zone_management": false, 00:09:38.554 "zone_append": false, 00:09:38.554 "compare": false, 00:09:38.554 "compare_and_write": 
false, 00:09:38.554 "abort": true, 00:09:38.554 "seek_hole": false, 00:09:38.554 "seek_data": false, 00:09:38.554 "copy": true, 00:09:38.554 "nvme_iov_md": false 00:09:38.554 }, 00:09:38.554 "memory_domains": [ 00:09:38.554 { 00:09:38.554 "dma_device_id": "system", 00:09:38.554 "dma_device_type": 1 00:09:38.554 }, 00:09:38.554 { 00:09:38.554 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:38.554 "dma_device_type": 2 00:09:38.554 } 00:09:38.554 ], 00:09:38.554 "driver_specific": {} 00:09:38.554 } 00:09:38.554 ]' 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:38.554 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:38.812 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:38.812 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:38.812 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:38.812 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:38.812 08:53:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:39.742 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:39.742 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:39.742 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:39.742 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:09:39.742 08:53:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:42.267 08:53:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:42.267 08:53:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:42.267 08:53:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:43.198 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:43.198 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:43.198 08:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:43.199 ************************************ 00:09:43.199 START TEST filesystem_ext4 00:09:43.199 ************************************ 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:09:43.199 08:53:59 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:09:43.199 08:53:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:43.199 mke2fs 1.47.0 (5-Feb-2023) 00:09:43.455 Discarding device blocks: 0/522240 done 00:09:43.455 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:43.455 Filesystem UUID: d6d689ac-961e-4226-917a-4cf3c8af067b 00:09:43.455 Superblock backups stored on blocks: 00:09:43.455 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:43.455 00:09:43.455 Allocating group tables: 0/64 done 00:09:43.455 Writing inode tables: 0/64 done 00:09:43.712 Creating journal (8192 blocks): done 00:09:45.204 Writing superblocks and filesystem accounting information: 0/6450/64 done 00:09:45.204 00:09:45.204 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:09:45.204 08:54:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:51.875 08:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2238396 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:51.875 00:09:51.875 real 0m7.921s 00:09:51.875 user 0m0.030s 00:09:51.875 sys 0m0.072s 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:51.875 ************************************ 00:09:51.875 END TEST filesystem_ext4 00:09:51.875 ************************************ 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:51.875 
08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:51.875 ************************************ 00:09:51.875 START TEST filesystem_btrfs 00:09:51.875 ************************************ 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:09:51.875 08:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:09:51.875 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:51.875 btrfs-progs v6.8.1 00:09:51.875 See https://btrfs.readthedocs.io for more information. 00:09:51.875 00:09:51.875 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:51.875 NOTE: several default settings have changed in version 5.15, please make sure 00:09:51.875 this does not affect your deployments: 00:09:51.875 - DUP for metadata (-m dup) 00:09:51.875 - enabled no-holes (-O no-holes) 00:09:51.875 - enabled free-space-tree (-R free-space-tree) 00:09:51.875 00:09:51.875 Label: (null) 00:09:51.875 UUID: 57f00300-ae2b-4ad9-9143-d6a18735b3b5 00:09:51.875 Node size: 16384 00:09:51.875 Sector size: 4096 (CPU page size: 4096) 00:09:51.876 Filesystem size: 510.00MiB 00:09:51.876 Block group profiles: 00:09:51.876 Data: single 8.00MiB 00:09:51.876 Metadata: DUP 32.00MiB 00:09:51.876 System: DUP 8.00MiB 00:09:51.876 SSD detected: yes 00:09:51.876 Zoned device: no 00:09:51.876 Features: extref, skinny-metadata, no-holes, free-space-tree 00:09:51.876 Checksum: crc32c 00:09:51.876 Number of devices: 1 00:09:51.876 Devices: 00:09:51.876 ID SIZE PATH 00:09:51.876 1 510.00MiB /dev/nvme0n1p1 00:09:51.876 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:51.876 08:54:07 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2238396 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:51.876 00:09:51.876 real 0m0.661s 00:09:51.876 user 0m0.019s 00:09:51.876 sys 0m0.120s 00:09:51.876 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:51.876 
08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:51.876 ************************************ 00:09:51.876 END TEST filesystem_btrfs 00:09:51.876 ************************************ 00:09:52.146 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:52.146 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:52.146 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:52.146 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.146 ************************************ 00:09:52.146 START TEST filesystem_xfs 00:09:52.146 ************************************ 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:09:52.147 08:54:07 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:52.147 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:52.147 = sectsz=512 attr=2, projid32bit=1 00:09:52.147 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:52.147 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:52.147 data = bsize=4096 blocks=130560, imaxpct=25 00:09:52.147 = sunit=0 swidth=0 blks 00:09:52.147 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:52.147 log =internal log bsize=4096 blocks=16384, version=2 00:09:52.147 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:52.147 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:53.080 Discarding blocks...Done. 
00:09:53.080 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:09:53.080 08:54:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2238396 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:55.607 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:55.608 08:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:55.608 00:09:55.608 real 0m3.213s 00:09:55.608 user 0m0.020s 00:09:55.608 sys 0m0.079s 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:55.608 ************************************ 00:09:55.608 END TEST filesystem_xfs 00:09:55.608 ************************************ 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:55.608 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2238396 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2238396 ']' 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2238396 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2238396 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2238396' 00:09:55.608 killing process with pid 2238396 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 2238396 00:09:55.608 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 2238396 00:09:55.868 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:55.868 00:09:55.868 real 0m17.888s 00:09:55.868 user 1m10.391s 00:09:55.868 sys 0m1.429s 00:09:55.868 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:55.868 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.868 ************************************ 00:09:55.868 END TEST nvmf_filesystem_no_in_capsule 00:09:55.868 ************************************ 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:56.127 08:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:56.127 ************************************ 00:09:56.127 START TEST nvmf_filesystem_in_capsule 00:09:56.127 ************************************ 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=2242132 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 2242132 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 2242132 ']' 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:56.127 08:54:11 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:56.127 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:56.127 08:54:11 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.127 [2024-11-20 08:54:12.024253] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:09:56.127 [2024-11-20 08:54:12.024296] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:56.127 [2024-11-20 08:54:12.104984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:56.127 [2024-11-20 08:54:12.142878] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:56.128 [2024-11-20 08:54:12.142917] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:56.128 [2024-11-20 08:54:12.142924] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:56.128 [2024-11-20 08:54:12.142933] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:56.128 [2024-11-20 08:54:12.142939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:56.128 [2024-11-20 08:54:12.144516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:56.128 [2024-11-20 08:54:12.144621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:56.128 [2024-11-20 08:54:12.144713] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:56.128 [2024-11-20 08:54:12.144714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.386 [2024-11-20 08:54:12.289934] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.386 Malloc1 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.386 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 08:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 [2024-11-20 08:54:12.435438] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:56.645 08:54:12 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:09:56.645 { 00:09:56.645 "name": "Malloc1", 00:09:56.645 "aliases": [ 00:09:56.645 "25346c81-171f-4ec3-9f03-c1d2147cfc24" 00:09:56.645 ], 00:09:56.645 "product_name": "Malloc disk", 00:09:56.645 "block_size": 512, 00:09:56.645 "num_blocks": 1048576, 00:09:56.645 "uuid": "25346c81-171f-4ec3-9f03-c1d2147cfc24", 00:09:56.645 "assigned_rate_limits": { 00:09:56.645 "rw_ios_per_sec": 0, 00:09:56.645 "rw_mbytes_per_sec": 0, 00:09:56.645 "r_mbytes_per_sec": 0, 00:09:56.645 "w_mbytes_per_sec": 0 00:09:56.645 }, 00:09:56.645 "claimed": true, 00:09:56.645 "claim_type": "exclusive_write", 00:09:56.645 "zoned": false, 00:09:56.645 "supported_io_types": { 00:09:56.645 "read": true, 00:09:56.645 "write": true, 00:09:56.645 "unmap": true, 00:09:56.645 "flush": true, 00:09:56.645 "reset": true, 00:09:56.645 "nvme_admin": false, 00:09:56.645 "nvme_io": false, 00:09:56.645 "nvme_io_md": false, 00:09:56.645 "write_zeroes": true, 00:09:56.645 "zcopy": true, 00:09:56.645 "get_zone_info": false, 00:09:56.645 "zone_management": false, 00:09:56.645 "zone_append": false, 00:09:56.645 "compare": false, 00:09:56.645 "compare_and_write": false, 00:09:56.645 "abort": true, 00:09:56.645 "seek_hole": false, 00:09:56.645 "seek_data": false, 00:09:56.645 "copy": true, 00:09:56.645 "nvme_iov_md": false 00:09:56.645 }, 00:09:56.645 "memory_domains": [ 00:09:56.645 { 00:09:56.645 "dma_device_id": "system", 00:09:56.645 "dma_device_type": 1 00:09:56.645 }, 00:09:56.645 { 00:09:56.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:56.645 "dma_device_type": 2 00:09:56.645 } 00:09:56.645 ], 00:09:56.645 
"driver_specific": {} 00:09:56.645 } 00:09:56.645 ]' 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:56.645 08:54:12 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:58.018 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:09:58.018 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:09:58.018 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:58.018 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:09:58.018 08:54:13 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:59.913 08:54:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:59.913 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:59.914 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:59.914 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:59.914 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:59.914 08:54:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:00.845 08:54:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:01.780 ************************************ 00:10:01.780 START TEST filesystem_in_capsule_ext4 00:10:01.780 ************************************ 00:10:01.780 08:54:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:01.780 08:54:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:01.780 mke2fs 1.47.0 (5-Feb-2023) 00:10:01.780 Discarding device blocks: 
0/522240 done 00:10:01.780 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:01.780 Filesystem UUID: e33732bf-d9d5-481a-9c91-f9f1543f8ad3 00:10:01.780 Superblock backups stored on blocks: 00:10:01.780 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:01.780 00:10:01.780 Allocating group tables: 0/64 done 00:10:01.780 Writing inode tables: 0/64 done 00:10:05.058 Creating journal (8192 blocks): done 00:10:05.058 Writing superblocks and filesystem accounting information: 0/64 done 00:10:05.058 00:10:05.058 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:05.058 08:54:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2242132 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.316 00:10:10.316 real 0m8.317s 00:10:10.316 user 0m0.032s 00:10:10.316 sys 0m0.071s 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:10.316 ************************************ 00:10:10.316 END TEST filesystem_in_capsule_ext4 00:10:10.316 ************************************ 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.316 ************************************ 00:10:10.316 START 
TEST filesystem_in_capsule_btrfs 00:10:10.316 ************************************ 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:10.316 08:54:25 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:10.316 btrfs-progs v6.8.1 00:10:10.316 See https://btrfs.readthedocs.io for more information. 00:10:10.316 00:10:10.316 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:10.316 NOTE: several default settings have changed in version 5.15, please make sure 00:10:10.316 this does not affect your deployments: 00:10:10.316 - DUP for metadata (-m dup) 00:10:10.316 - enabled no-holes (-O no-holes) 00:10:10.316 - enabled free-space-tree (-R free-space-tree) 00:10:10.316 00:10:10.316 Label: (null) 00:10:10.317 UUID: eb5a6598-7a2b-4371-9a9f-57c3c03ef906 00:10:10.317 Node size: 16384 00:10:10.317 Sector size: 4096 (CPU page size: 4096) 00:10:10.317 Filesystem size: 510.00MiB 00:10:10.317 Block group profiles: 00:10:10.317 Data: single 8.00MiB 00:10:10.317 Metadata: DUP 32.00MiB 00:10:10.317 System: DUP 8.00MiB 00:10:10.317 SSD detected: yes 00:10:10.317 Zoned device: no 00:10:10.317 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:10.317 Checksum: crc32c 00:10:10.317 Number of devices: 1 00:10:10.317 Devices: 00:10:10.317 ID SIZE PATH 00:10:10.317 1 510.00MiB /dev/nvme0n1p1 00:10:10.317 00:10:10.317 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:10.317 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs 
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:11.250 08:54:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2242132 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:11.250 00:10:11.250 real 0m1.073s 00:10:11.250 user 0m0.019s 00:10:11.250 sys 0m0.121s 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 ************************************ 00:10:11.250 END TEST filesystem_in_capsule_btrfs 00:10:11.250 ************************************ 00:10:11.250 08:54:27 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:11.250 ************************************ 00:10:11.250 START TEST filesystem_in_capsule_xfs 00:10:11.250 ************************************ 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:11.250 
08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:11.250 08:54:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:11.250 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:11.250 = sectsz=512 attr=2, projid32bit=1 00:10:11.250 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:11.250 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:11.250 data = bsize=4096 blocks=130560, imaxpct=25 00:10:11.250 = sunit=0 swidth=0 blks 00:10:11.250 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:11.250 log =internal log bsize=4096 blocks=16384, version=2 00:10:11.250 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:11.251 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:12.183 Discarding blocks...Done. 
00:10:12.183 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:12.183 08:54:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2242132 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:14.082 08:54:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:14.082 00:10:14.082 real 0m2.894s 00:10:14.082 user 0m0.027s 00:10:14.082 sys 0m0.070s 00:10:14.082 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.082 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:14.082 ************************************ 00:10:14.082 END TEST filesystem_in_capsule_xfs 00:10:14.082 ************************************ 00:10:14.082 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:14.082 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:14.082 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:14.341 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.341 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2242132 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 2242132 ']' 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 2242132 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:14.341 08:54:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2242132 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2242132' 00:10:14.341 killing process with pid 2242132 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 2242132 00:10:14.341 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 2242132 00:10:14.600 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:14.600 00:10:14.600 real 0m18.647s 00:10:14.600 user 1m13.378s 00:10:14.600 sys 0m1.454s 00:10:14.601 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:14.601 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:14.601 ************************************ 00:10:14.601 END TEST nvmf_filesystem_in_capsule 00:10:14.601 ************************************ 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:14.860 rmmod nvme_tcp 00:10:14.860 rmmod nvme_fabrics 00:10:14.860 rmmod nvme_keyring 00:10:14.860 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@264 -- # local dev 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:14.861 08:54:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:16.765 08:54:32 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # return 0 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:10:16.765 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@284 -- # iptr 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-save 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-restore 00:10:17.024 00:10:17.024 real 0m45.459s 00:10:17.024 user 2m25.959s 00:10:17.024 sys 0m7.643s 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:17.024 ************************************ 00:10:17.024 END TEST nvmf_filesystem 00:10:17.024 ************************************ 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:17.024 ************************************ 00:10:17.024 START TEST nvmf_target_discovery 00:10:17.024 ************************************ 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:10:17.024 * Looking for test storage... 00:10:17.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:10:17.024 08:54:32 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:10:17.024 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:17.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.025 --rc genhtml_branch_coverage=1 00:10:17.025 --rc genhtml_function_coverage=1 00:10:17.025 --rc genhtml_legend=1 00:10:17.025 --rc geninfo_all_blocks=1 00:10:17.025 --rc geninfo_unexecuted_blocks=1 00:10:17.025 00:10:17.025 ' 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:17.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.025 --rc genhtml_branch_coverage=1 00:10:17.025 --rc genhtml_function_coverage=1 00:10:17.025 --rc genhtml_legend=1 00:10:17.025 --rc geninfo_all_blocks=1 00:10:17.025 --rc geninfo_unexecuted_blocks=1 00:10:17.025 00:10:17.025 ' 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:17.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.025 --rc genhtml_branch_coverage=1 00:10:17.025 --rc genhtml_function_coverage=1 00:10:17.025 --rc genhtml_legend=1 00:10:17.025 --rc geninfo_all_blocks=1 00:10:17.025 --rc geninfo_unexecuted_blocks=1 00:10:17.025 00:10:17.025 ' 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:17.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:17.025 --rc genhtml_branch_coverage=1 00:10:17.025 --rc genhtml_function_coverage=1 00:10:17.025 --rc genhtml_legend=1 00:10:17.025 --rc geninfo_all_blocks=1 00:10:17.025 --rc geninfo_unexecuted_blocks=1 00:10:17.025 00:10:17.025 ' 00:10:17.025 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:17.025 
08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:17.285 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.286 08:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:17.286 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:17.286 08:54:33 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:10:17.286 08:54:33 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:23.857 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.857 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:23.857 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:23.857 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:23.857 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:23.857 Found net devices under 0000:86:00.0: cvl_0_0 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.1: cvl_0_1' 00:10:23.857 Found net devices under 0000:86:00.1: cvl_0_1 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:23.857 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:23.858 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:23.858 10.0.0.1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:10:23.858 08:54:38 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:23.858 10.0.0.2 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:23.858 08:54:38 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 
)) 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:23.858 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:23.859 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:23.859 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:10:23.859 00:10:23.859 --- 10.0.0.1 ping statistics --- 00:10:23.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.859 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:23.859 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:10:23.859 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:10:23.859 00:10:23.859 --- 10.0.0.2 ping statistics --- 00:10:23.859 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:23.859 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:23.859 08:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:10:23.859 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:23.860 08:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=2248903 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 2248903 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 2248903 ']' 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:23.860 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.860 [2024-11-20 08:54:39.235995] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:10:23.860 [2024-11-20 08:54:39.236048] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:23.860 [2024-11-20 08:54:39.316359] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:23.860 [2024-11-20 08:54:39.359610] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:23.860 [2024-11-20 08:54:39.359649] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:23.860 [2024-11-20 08:54:39.359656] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:23.860 [2024-11-20 08:54:39.359662] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:23.860 [2024-11-20 08:54:39.359667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
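The setup trace above repeatedly calls a `val_to_ip` helper (nvmf/setup.sh) to turn 32-bit integers from the `ip_pool` counter (e.g. 167772161, i.e. 0x0A000001) into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 that get assigned to cvl_0_0 and cvl_0_1. As a minimal standalone sketch (a hypothetical re-implementation, not the actual helper, which passes pre-split octets to `printf`), the conversion shifts out one octet at a time:

```shell
# Hypothetical re-implementation of the val_to_ip helper seen in the
# trace: unpack a 32-bit integer into dotted-quad notation by shifting
# out one octet at a time, most significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This also explains the `(( ip_pool += 2 ))` stride in the trace: each initiator/target pair consumes two consecutive addresses from the pool.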
00:10:23.860 [2024-11-20 08:54:39.361257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:23.860 [2024-11-20 08:54:39.361377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:23.860 [2024-11-20 08:54:39.361490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:23.860 [2024-11-20 08:54:39.361490] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.860 [2024-11-20 08:54:39.502862] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.860 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:10:23.860 08:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 Null1 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 [2024-11-20 08:54:39.548325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 Null2 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 
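The four-iteration loop being traced here issues the same RPC sequence for each subsystem: create a null bdev, create the subsystem, attach the bdev as a namespace, and add a TCP listener. A dry-run sketch of that sequence; `rpc` below is a hypothetical stub that records the calls instead of invoking SPDK's `scripts/rpc.py`, and the argument values are copied from the trace:

```shell
# Dry-run sketch of the per-subsystem setup loop from the trace.
# `rpc` is a recording stub; against a live target it would be SPDK's scripts/rpc.py.
cmds=""
rpc() { cmds="${cmds}$*"$'\n'; }

rpc nvmf_create_transport -t tcp -o -u 8192            # TCP transport (options as traced)
for i in 1 2 3 4; do
  rpc bdev_null_create "Null$i" 102400 512             # size and block size as traced
  rpc nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
  rpc nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
  rpc nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
done
printf '%s' "$cmds"
```

The later teardown in the trace mirrors this loop in reverse: `nvmf_delete_subsystem` and `bdev_null_delete` per index, then removing the discovery referral.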
08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 Null3 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 Null4 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.861 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:23.862 08:54:39 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:10:23.862 00:10:23.862 Discovery Log Number of Records 6, Generation counter 6 00:10:23.862 =====Discovery Log Entry 0====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: current discovery subsystem 00:10:23.862 treq: not required 00:10:23.862 portid: 0 00:10:23.862 trsvcid: 4420 00:10:23.862 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: explicit discovery connections, duplicate discovery information 00:10:23.862 sectype: none 00:10:23.862 =====Discovery Log Entry 1====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: nvme subsystem 00:10:23.862 treq: not required 00:10:23.862 portid: 0 00:10:23.862 trsvcid: 4420 00:10:23.862 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: none 00:10:23.862 sectype: none 00:10:23.862 =====Discovery Log Entry 2====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: nvme subsystem 00:10:23.862 treq: not required 00:10:23.862 portid: 0 00:10:23.862 trsvcid: 4420 00:10:23.862 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: none 00:10:23.862 sectype: none 00:10:23.862 =====Discovery Log Entry 3====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: nvme subsystem 00:10:23.862 treq: not required 00:10:23.862 portid: 
0 00:10:23.862 trsvcid: 4420 00:10:23.862 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: none 00:10:23.862 sectype: none 00:10:23.862 =====Discovery Log Entry 4====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: nvme subsystem 00:10:23.862 treq: not required 00:10:23.862 portid: 0 00:10:23.862 trsvcid: 4420 00:10:23.862 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: none 00:10:23.862 sectype: none 00:10:23.862 =====Discovery Log Entry 5====== 00:10:23.862 trtype: tcp 00:10:23.862 adrfam: ipv4 00:10:23.862 subtype: discovery subsystem referral 00:10:23.862 treq: not required 00:10:23.862 portid: 0 00:10:23.862 trsvcid: 4430 00:10:23.862 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:23.862 traddr: 10.0.0.2 00:10:23.862 eflags: none 00:10:23.862 sectype: none 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:23.862 Perform nvmf subsystem discovery via RPC 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:23.862 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:23.862 [ 00:10:23.862 { 00:10:23.862 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:23.862 "subtype": "Discovery", 00:10:23.862 "listen_addresses": [ 00:10:23.862 { 00:10:23.862 "trtype": "TCP", 00:10:23.862 "adrfam": "IPv4", 00:10:23.862 "traddr": "10.0.0.2", 00:10:23.862 "trsvcid": "4420" 00:10:23.862 } 00:10:23.862 ], 00:10:23.862 "allow_any_host": true, 00:10:23.862 "hosts": [] 00:10:23.862 }, 00:10:23.862 { 00:10:23.862 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:23.862 "subtype": "NVMe", 00:10:23.862 "listen_addresses": [ 
00:10:23.862 { 00:10:23.862 "trtype": "TCP", 00:10:23.862 "adrfam": "IPv4", 00:10:23.862 "traddr": "10.0.0.2", 00:10:23.862 "trsvcid": "4420" 00:10:23.862 } 00:10:23.862 ], 00:10:23.862 "allow_any_host": true, 00:10:23.862 "hosts": [], 00:10:23.862 "serial_number": "SPDK00000000000001", 00:10:23.862 "model_number": "SPDK bdev Controller", 00:10:23.862 "max_namespaces": 32, 00:10:23.862 "min_cntlid": 1, 00:10:23.862 "max_cntlid": 65519, 00:10:23.862 "namespaces": [ 00:10:23.862 { 00:10:23.862 "nsid": 1, 00:10:23.862 "bdev_name": "Null1", 00:10:23.862 "name": "Null1", 00:10:23.862 "nguid": "2FF656C2BF3146C8B00EA5B0D4A3F2D7", 00:10:23.862 "uuid": "2ff656c2-bf31-46c8-b00e-a5b0d4a3f2d7" 00:10:23.862 } 00:10:23.862 ] 00:10:23.862 }, 00:10:23.862 { 00:10:23.862 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:23.862 "subtype": "NVMe", 00:10:23.862 "listen_addresses": [ 00:10:23.862 { 00:10:23.862 "trtype": "TCP", 00:10:23.862 "adrfam": "IPv4", 00:10:23.862 "traddr": "10.0.0.2", 00:10:23.862 "trsvcid": "4420" 00:10:23.862 } 00:10:23.862 ], 00:10:23.862 "allow_any_host": true, 00:10:23.862 "hosts": [], 00:10:23.862 "serial_number": "SPDK00000000000002", 00:10:23.862 "model_number": "SPDK bdev Controller", 00:10:23.862 "max_namespaces": 32, 00:10:23.862 "min_cntlid": 1, 00:10:23.862 "max_cntlid": 65519, 00:10:23.862 "namespaces": [ 00:10:23.862 { 00:10:23.862 "nsid": 1, 00:10:23.863 "bdev_name": "Null2", 00:10:23.863 "name": "Null2", 00:10:23.863 "nguid": "5F4766AEF34D4D1E862FA87243EAF462", 00:10:23.863 "uuid": "5f4766ae-f34d-4d1e-862f-a87243eaf462" 00:10:23.863 } 00:10:23.863 ] 00:10:23.863 }, 00:10:23.863 { 00:10:23.863 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:23.863 "subtype": "NVMe", 00:10:23.863 "listen_addresses": [ 00:10:23.863 { 00:10:23.863 "trtype": "TCP", 00:10:23.863 "adrfam": "IPv4", 00:10:23.863 "traddr": "10.0.0.2", 00:10:23.863 "trsvcid": "4420" 00:10:23.863 } 00:10:23.863 ], 00:10:23.863 "allow_any_host": true, 00:10:23.863 "hosts": [], 00:10:23.863 
"serial_number": "SPDK00000000000003", 00:10:23.863 "model_number": "SPDK bdev Controller", 00:10:23.863 "max_namespaces": 32, 00:10:23.863 "min_cntlid": 1, 00:10:23.863 "max_cntlid": 65519, 00:10:23.863 "namespaces": [ 00:10:23.863 { 00:10:23.863 "nsid": 1, 00:10:23.863 "bdev_name": "Null3", 00:10:23.863 "name": "Null3", 00:10:23.863 "nguid": "02F0F0CB6E7344208C544320D2E61CAB", 00:10:23.863 "uuid": "02f0f0cb-6e73-4420-8c54-4320d2e61cab" 00:10:23.863 } 00:10:23.863 ] 00:10:23.863 }, 00:10:23.863 { 00:10:23.863 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:23.863 "subtype": "NVMe", 00:10:23.863 "listen_addresses": [ 00:10:23.863 { 00:10:23.863 "trtype": "TCP", 00:10:23.863 "adrfam": "IPv4", 00:10:23.863 "traddr": "10.0.0.2", 00:10:23.863 "trsvcid": "4420" 00:10:23.863 } 00:10:23.863 ], 00:10:23.863 "allow_any_host": true, 00:10:23.863 "hosts": [], 00:10:23.863 "serial_number": "SPDK00000000000004", 00:10:23.863 "model_number": "SPDK bdev Controller", 00:10:23.863 "max_namespaces": 32, 00:10:23.863 "min_cntlid": 1, 00:10:23.863 "max_cntlid": 65519, 00:10:23.863 "namespaces": [ 00:10:23.863 { 00:10:23.863 "nsid": 1, 00:10:23.863 "bdev_name": "Null4", 00:10:23.863 "name": "Null4", 00:10:23.863 "nguid": "A1EF2B11A83F4C18B8EF8B7A3B002B03", 00:10:23.863 "uuid": "a1ef2b11-a83f-4c18-b8ef-8b7a3b002b03" 00:10:23.863 } 00:10:23.863 ] 00:10:23.863 } 00:10:23.863 ] 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:10:23.863 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 
1 4) 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:24.121 08:54:39 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:10:24.121 
08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:24.121 rmmod nvme_tcp 00:10:24.121 rmmod nvme_fabrics 00:10:24.121 rmmod nvme_keyring 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 2248903 ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 2248903 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 2248903 ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 2248903 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2248903 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2248903' 00:10:24.121 killing process with pid 2248903 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 2248903 00:10:24.121 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 2248903 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@264 -- # local dev 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:24.380 08:54:40 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # return 0 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:26.285 08:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:26.285 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:10:26.543 
08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@284 -- # iptr 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-save 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:10:26.543 00:10:26.543 real 0m9.462s 00:10:26.543 user 0m5.621s 00:10:26.543 sys 0m4.880s 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:26.543 ************************************ 00:10:26.543 END TEST nvmf_target_discovery 00:10:26.543 ************************************ 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.543 ************************************ 00:10:26.543 START TEST nvmf_referrals 00:10:26.543 ************************************ 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:26.543 * Looking for test storage... 
00:10:26.543 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:26.543 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:10:26.803 08:54:42 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:26.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.803 
--rc genhtml_branch_coverage=1 00:10:26.803 --rc genhtml_function_coverage=1 00:10:26.803 --rc genhtml_legend=1 00:10:26.803 --rc geninfo_all_blocks=1 00:10:26.803 --rc geninfo_unexecuted_blocks=1 00:10:26.803 00:10:26.803 ' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:26.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.803 --rc genhtml_branch_coverage=1 00:10:26.803 --rc genhtml_function_coverage=1 00:10:26.803 --rc genhtml_legend=1 00:10:26.803 --rc geninfo_all_blocks=1 00:10:26.803 --rc geninfo_unexecuted_blocks=1 00:10:26.803 00:10:26.803 ' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:26.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.803 --rc genhtml_branch_coverage=1 00:10:26.803 --rc genhtml_function_coverage=1 00:10:26.803 --rc genhtml_legend=1 00:10:26.803 --rc geninfo_all_blocks=1 00:10:26.803 --rc geninfo_unexecuted_blocks=1 00:10:26.803 00:10:26.803 ' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:26.803 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:26.803 --rc genhtml_branch_coverage=1 00:10:26.803 --rc genhtml_function_coverage=1 00:10:26.803 --rc genhtml_legend=1 00:10:26.803 --rc geninfo_all_blocks=1 00:10:26.803 --rc geninfo_unexecuted_blocks=1 00:10:26.803 00:10:26.803 ' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.803 
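The `lt 1.15 2` / `cmp_versions` calls traced above gate the coverage flags on the installed lcov version: only when lcov is older than 2.x does the script export the `--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1` options seen in `LCOV_OPTS`. A small stand-in for that check, using `sort -V` instead of the field-by-field loop in `scripts/common.sh` (an assumption for brevity; the in-tree helper splits versions on `.-:` and compares each field numerically):

```shell
# Version "less than" via GNU sort -V: $1 < $2 exactly when the two
# strings differ and $1 sorts first in version order.
lt() {
  [ "$1" != "$2" ] &&
  [ "$(printf '%s\n%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]
}

if lt 1.15 2; then
  # matches the branch taken in this log: lcov < 2 keeps the old flags
  echo '--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
fi
```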
08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.803 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:26.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:10:26.804 08:54:42 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:33.374 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:33.374 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:33.374 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.374 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:33.374 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:33.374 Found net devices under 0000:86:00.0: cvl_0_0 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:33.374 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:33.375 Found net devices under 0000:86:00.1: cvl_0_1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:33.375 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # create_target_ns 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:33.375 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:33.375 10.0.0.1 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:33.376 
10.0.0.2 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:33.376 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:33.377 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:33.377 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:33.377 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.479 ms 00:10:33.377 00:10:33.377 --- 10.0.0.1 ping statistics --- 00:10:33.377 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.377 rtt min/avg/max/mdev = 0.479/0.479/0.479/0.000 ms 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:33.377 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:33.378 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:33.378 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:10:33.378 00:10:33.378 --- 10.0.0.2 ping statistics --- 00:10:33.378 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:33.378 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:33.378 
08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:33.378 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:33.379 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.380 08:54:48 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:33.380 
08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:33.380 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 
00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=2252700 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 2252700 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 2252700 ']' 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:33.381 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:33.381 08:54:48 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.381 [2024-11-20 08:54:48.820766] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:10:33.381 [2024-11-20 08:54:48.820815] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:33.381 [2024-11-20 08:54:48.901111] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:33.381 [2024-11-20 08:54:48.942307] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:33.381 [2024-11-20 08:54:48.942347] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:33.381 [2024-11-20 08:54:48.942354] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:33.381 [2024-11-20 08:54:48.942360] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:33.381 [2024-11-20 08:54:48.942365] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:33.381 [2024-11-20 08:54:48.943789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:33.381 [2024-11-20 08:54:48.943898] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:33.381 [2024-11-20 08:54:48.944016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:33.381 [2024-11-20 08:54:48.944017] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:33.381 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:33.381 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 [2024-11-20 08:54:49.093484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 [2024-11-20 08:54:49.106888] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.382 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:33.383 08:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:33.383 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.384 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.642 08:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:33.642 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:33.899 08:54:49 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:33.899 08:54:49 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:34.155 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:34.412 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:34.412 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:34.412 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.412 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:34.412 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:34.413 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:34.669 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:34.925 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:34.926 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:34.926 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:34.926 08:54:50 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:35.182 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:35.182 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:35.183 rmmod nvme_tcp 00:10:35.183 rmmod nvme_fabrics 00:10:35.183 rmmod nvme_keyring 00:10:35.183 08:54:51 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 2252700 ']' 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 2252700 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 2252700 ']' 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 2252700 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2252700 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2252700' 00:10:35.183 killing process with pid 2252700 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 2252700 00:10:35.183 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 2252700 00:10:35.441 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:35.441 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:10:35.442 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@264 -- # local dev 00:10:35.442 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:35.442 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:35.442 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:35.442 08:54:51 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # return 0 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:37.348 08:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@284 -- # iptr 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-save 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:37.348 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-restore 00:10:37.608 00:10:37.608 real 0m10.973s 00:10:37.608 user 0m12.133s 00:10:37.608 sys 0m5.331s 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:37.608 ************************************ 00:10:37.608 END TEST nvmf_referrals 00:10:37.608 ************************************ 00:10:37.608 08:54:53 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:37.608 ************************************ 00:10:37.608 START TEST nvmf_connect_disconnect 00:10:37.608 ************************************ 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:37.608 * Looking for test storage... 00:10:37.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:37.608 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:37.868 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:10:37.868 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:37.868 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.868 --rc genhtml_branch_coverage=1 00:10:37.868 --rc 
genhtml_function_coverage=1 00:10:37.868 --rc genhtml_legend=1 00:10:37.868 --rc geninfo_all_blocks=1 00:10:37.868 --rc geninfo_unexecuted_blocks=1 00:10:37.868 00:10:37.868 ' 00:10:37.868 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.868 --rc genhtml_branch_coverage=1 00:10:37.868 --rc genhtml_function_coverage=1 00:10:37.868 --rc genhtml_legend=1 00:10:37.868 --rc geninfo_all_blocks=1 00:10:37.868 --rc geninfo_unexecuted_blocks=1 00:10:37.868 00:10:37.868 ' 00:10:37.868 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:37.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.868 --rc genhtml_branch_coverage=1 00:10:37.868 --rc genhtml_function_coverage=1 00:10:37.868 --rc genhtml_legend=1 00:10:37.869 --rc geninfo_all_blocks=1 00:10:37.869 --rc geninfo_unexecuted_blocks=1 00:10:37.869 00:10:37.869 ' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:37.869 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:37.869 --rc genhtml_branch_coverage=1 00:10:37.869 --rc genhtml_function_coverage=1 00:10:37.869 --rc genhtml_legend=1 00:10:37.869 --rc geninfo_all_blocks=1 00:10:37.869 --rc geninfo_unexecuted_blocks=1 00:10:37.869 00:10:37.869 ' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:37.869 08:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:37.869 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:37.869 08:54:53 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:10:37.869 08:54:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:44.444 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:44.444 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:44.444 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:44.444 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.444 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:44.445 Found net devices under 0000:86:00.0: cvl_0_0 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:10:44.445 Found net devices under 0000:86:00.1: cvl_0_1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:44.445 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:44.445 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:44.445 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:44.445 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:44.445 10.0.0.1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:44.446 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:44.446 10.0.0.2 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 
in_ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:44.446 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:44.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:44.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:10:44.446 00:10:44.446 --- 10.0.0.1 ping statistics --- 00:10:44.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.446 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.446 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:44.447 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:44.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:44.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:10:44.447 00:10:44.447 --- 10.0.0.2 ping statistics --- 00:10:44.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:44.447 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:44.447 
08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:44.447 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:10:44.447 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:44.447 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:44.448 08:54:59 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=2256800 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 2256800 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 2256800 ']' 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:44.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:44.448 08:54:59 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 [2024-11-20 08:54:59.839042] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:10:44.448 [2024-11-20 08:54:59.839087] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:44.448 [2024-11-20 08:54:59.917523] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:44.448 [2024-11-20 08:54:59.959748] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:44.448 [2024-11-20 08:54:59.959788] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:44.448 [2024-11-20 08:54:59.959795] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:44.448 [2024-11-20 08:54:59.959802] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:44.448 [2024-11-20 08:54:59.959806] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:44.448 [2024-11-20 08:54:59.961281] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:44.448 [2024-11-20 08:54:59.961390] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:44.448 [2024-11-20 08:54:59.961499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:44.448 [2024-11-20 08:54:59.961500] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 [2024-11-20 08:55:00.103447] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:44.448 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:44.448 08:55:00 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:44.448 [2024-11-20 08:55:00.171188] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:44.449 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:44.449 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:44.449 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:44.449 08:55:00 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:47.724 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:51.019 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:54.384 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.669 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.946 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:00.946 rmmod nvme_tcp 00:11:00.946 rmmod nvme_fabrics 00:11:00.946 rmmod nvme_keyring 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 2256800 ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 2256800 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2256800 ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 2256800 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2256800 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2256800' 00:11:00.946 killing process with pid 2256800 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 2256800 00:11:00.946 08:55:16 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 2256800 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@264 -- # local dev 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:00.946 08:55:16 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # return 0 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:03.481 08:55:18 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@284 -- # iptr 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # grep -v 
SPDK_NVMF 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:11:03.481 00:11:03.481 real 0m25.455s 00:11:03.481 user 1m8.826s 00:11:03.481 sys 0m5.888s 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:03.481 ************************************ 00:11:03.481 END TEST nvmf_connect_disconnect 00:11:03.481 ************************************ 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:03.481 ************************************ 00:11:03.481 START TEST nvmf_multitarget 00:11:03.481 ************************************ 00:11:03.481 08:55:18 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:11:03.481 * Looking for test storage... 
00:11:03.481 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:11:03.481 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:03.482 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.482 --rc genhtml_branch_coverage=1 00:11:03.482 --rc genhtml_function_coverage=1 00:11:03.482 --rc genhtml_legend=1 00:11:03.482 --rc geninfo_all_blocks=1 00:11:03.482 --rc geninfo_unexecuted_blocks=1 00:11:03.482 00:11:03.482 ' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.482 --rc genhtml_branch_coverage=1 00:11:03.482 --rc genhtml_function_coverage=1 00:11:03.482 --rc genhtml_legend=1 00:11:03.482 --rc geninfo_all_blocks=1 00:11:03.482 --rc geninfo_unexecuted_blocks=1 00:11:03.482 00:11:03.482 ' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.482 --rc genhtml_branch_coverage=1 00:11:03.482 --rc genhtml_function_coverage=1 00:11:03.482 --rc genhtml_legend=1 00:11:03.482 --rc geninfo_all_blocks=1 00:11:03.482 --rc geninfo_unexecuted_blocks=1 00:11:03.482 00:11:03.482 ' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:03.482 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:03.482 --rc genhtml_branch_coverage=1 00:11:03.482 --rc genhtml_function_coverage=1 00:11:03.482 --rc genhtml_legend=1 00:11:03.482 --rc geninfo_all_blocks=1 00:11:03.482 --rc geninfo_unexecuted_blocks=1 00:11:03.482 00:11:03.482 ' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:03.482 08:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@50 -- # : 0 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:03.482 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:03.482 08:55:19 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:11:03.482 08:55:19 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:10.058 08:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:10.058 08:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:10.058 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:10.058 08:55:24 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:10.058 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:10.058 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:10.059 Found net devices under 0000:86:00.0: cvl_0_0 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:10.059 Found net devices under 0000:86:00.1: cvl_0_1 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:10.059 
08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # create_target_ns 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:10.059 08:55:24 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:11:10.059 10.0.0.1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:10.059 10.0.0.2 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:10.059 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:10.060 08:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:10.060 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:10.060 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:11:10.060 00:11:10.060 --- 10.0.0.1 ping statistics --- 00:11:10.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.060 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:10.060 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:10.060 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.051 ms 00:11:10.060 00:11:10.060 --- 10.0.0.2 ping statistics --- 00:11:10.060 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:10.060 rtt min/avg/max/mdev = 0.051/0.051/0.051/0.000 ms 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:11:10.060 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:10.061 08:55:25 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=2263216 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 2263216 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 2263216 ']' 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:10.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:10.061 08:55:25 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:10.061 [2024-11-20 08:55:25.389085] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:11:10.061 [2024-11-20 08:55:25.389127] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:10.061 [2024-11-20 08:55:25.469441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:10.061 [2024-11-20 08:55:25.510797] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:10.061 [2024-11-20 08:55:25.510837] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:10.061 [2024-11-20 08:55:25.510847] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:10.061 [2024-11-20 08:55:25.510852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:10.061 [2024-11-20 08:55:25.510857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:10.061 [2024-11-20 08:55:25.512387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:10.061 [2024-11-20 08:55:25.512492] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:10.061 [2024-11-20 08:55:25.512600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:10.061 [2024-11-20 08:55:25.512601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:10.318 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:10.319 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:11:10.576 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:11:10.576 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:11:10.576 "nvmf_tgt_1" 00:11:10.576 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:11:10.576 "nvmf_tgt_2" 00:11:10.576 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:10.576 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:11:10.833 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:11:10.833 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:11:10.833 true 00:11:10.833 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:11:11.091 true 00:11:11.091 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:11:11.091 08:55:26 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:11.091 08:55:27 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:11.091 rmmod nvme_tcp 00:11:11.091 rmmod nvme_fabrics 00:11:11.091 rmmod nvme_keyring 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 2263216 ']' 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 2263216 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 2263216 ']' 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 2263216 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:11.091 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2263216 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2263216' 00:11:11.350 killing process with pid 2263216 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 2263216 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 2263216 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@264 -- # local dev 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:11.350 08:55:27 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # return 0 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.887 08:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@284 -- # iptr 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # 
iptables-save 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-restore 00:11:13.887 00:11:13.887 real 0m10.396s 00:11:13.887 user 0m9.908s 00:11:13.887 sys 0m5.033s 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 ************************************ 00:11:13.887 END TEST nvmf_multitarget 00:11:13.887 ************************************ 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.887 ************************************ 00:11:13.887 START TEST nvmf_rpc 00:11:13.887 ************************************ 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:11:13.887 * Looking for test storage... 
00:11:13.887 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.887 08:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:11:13.887 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:13.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.888 --rc genhtml_branch_coverage=1 00:11:13.888 --rc genhtml_function_coverage=1 00:11:13.888 --rc genhtml_legend=1 00:11:13.888 --rc geninfo_all_blocks=1 00:11:13.888 --rc geninfo_unexecuted_blocks=1 
00:11:13.888 00:11:13.888 ' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:13.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.888 --rc genhtml_branch_coverage=1 00:11:13.888 --rc genhtml_function_coverage=1 00:11:13.888 --rc genhtml_legend=1 00:11:13.888 --rc geninfo_all_blocks=1 00:11:13.888 --rc geninfo_unexecuted_blocks=1 00:11:13.888 00:11:13.888 ' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:13.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.888 --rc genhtml_branch_coverage=1 00:11:13.888 --rc genhtml_function_coverage=1 00:11:13.888 --rc genhtml_legend=1 00:11:13.888 --rc geninfo_all_blocks=1 00:11:13.888 --rc geninfo_unexecuted_blocks=1 00:11:13.888 00:11:13.888 ' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:13.888 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.888 --rc genhtml_branch_coverage=1 00:11:13.888 --rc genhtml_function_coverage=1 00:11:13.888 --rc genhtml_legend=1 00:11:13.888 --rc geninfo_all_blocks=1 00:11:13.888 --rc geninfo_unexecuted_blocks=1 00:11:13.888 00:11:13.888 ' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.888 08:55:29 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:13.888 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:11:13.888 08:55:29 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.461 08:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:20.461 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:20.461 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.461 08:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:20.461 Found net devices under 0000:86:00.0: cvl_0_0 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:20.461 Found net devices under 0000:86:00.1: cvl_0_1 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:20.461 08:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # create_target_ns 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:20.461 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # 
tee /sys/class/net/cvl_0_0/ifalias 00:11:20.462 10.0.0.1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:20.462 10.0.0.2 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ 
-n '' ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
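The `set_ip` calls traced above convert a 32-bit integer from the `ip_pool` (e.g. `167772161`, i.e. `0x0a000001`) into a dotted-quad address via `val_to_ip` before running `ip addr add`. The trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the helper below is a hypothetical recreation of that conversion (the byte-shifting is an assumption; only the printf format string appears in the log):

```shell
# Hypothetical sketch of setup.sh's val_to_ip helper, reconstructed from the
# traced printf call: split a 32-bit integer into four octets, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side, as seen in the trace)
val_to_ip 167772162   # 10.0.0.2 (target side, inside the netns)
```

This matches the trace, where `ip_pool=0x0a000001` yields consecutive `10.0.0.1`/`10.0.0.2` pairs, incrementing by 2 per interface pair (`ip_pool += 2`).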
00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 
00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.462 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:20.463 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:20.463 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:11:20.463 00:11:20.463 --- 10.0.0.1 ping statistics --- 00:11:20.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.463 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:20.463 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:20.463 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:11:20.463 00:11:20.463 --- 10.0.0.2 ping statistics --- 00:11:20.463 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.463 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:20.463 08:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 
-- # local dev=target0 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:20.463 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:20.464 08:55:35 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=2267035 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 2267035 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 2267035 ']' 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.464 08:55:35 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.464 [2024-11-20 08:55:35.838705] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:11:20.464 [2024-11-20 08:55:35.838750] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.464 [2024-11-20 08:55:35.920144] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.464 [2024-11-20 08:55:35.962904] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.464 [2024-11-20 08:55:35.962940] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:20.464 [2024-11-20 08:55:35.962951] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.464 [2024-11-20 08:55:35.962957] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.464 [2024-11-20 08:55:35.962963] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:20.464 [2024-11-20 08:55:35.964412] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.464 [2024-11-20 08:55:35.964520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.464 [2024-11-20 08:55:35.964629] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.464 [2024-11-20 08:55:35.964630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.722 08:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:11:20.722 "tick_rate": 2300000000, 00:11:20.722 "poll_groups": [ 00:11:20.722 { 00:11:20.722 "name": "nvmf_tgt_poll_group_000", 00:11:20.722 "admin_qpairs": 0, 00:11:20.722 "io_qpairs": 0, 00:11:20.722 "current_admin_qpairs": 0, 00:11:20.722 "current_io_qpairs": 0, 00:11:20.722 "pending_bdev_io": 0, 00:11:20.722 "completed_nvme_io": 0, 00:11:20.722 "transports": [] 00:11:20.722 }, 00:11:20.722 { 00:11:20.722 "name": "nvmf_tgt_poll_group_001", 00:11:20.722 "admin_qpairs": 0, 00:11:20.722 "io_qpairs": 0, 00:11:20.722 "current_admin_qpairs": 0, 00:11:20.722 "current_io_qpairs": 0, 00:11:20.722 "pending_bdev_io": 0, 00:11:20.722 "completed_nvme_io": 0, 00:11:20.722 "transports": [] 00:11:20.722 }, 00:11:20.722 { 00:11:20.722 "name": "nvmf_tgt_poll_group_002", 00:11:20.722 "admin_qpairs": 0, 00:11:20.722 "io_qpairs": 0, 00:11:20.722 "current_admin_qpairs": 0, 00:11:20.722 "current_io_qpairs": 0, 00:11:20.722 "pending_bdev_io": 0, 00:11:20.722 "completed_nvme_io": 0, 00:11:20.722 "transports": [] 00:11:20.722 }, 00:11:20.722 { 00:11:20.722 "name": "nvmf_tgt_poll_group_003", 00:11:20.722 "admin_qpairs": 0, 00:11:20.722 "io_qpairs": 0, 00:11:20.722 "current_admin_qpairs": 0, 00:11:20.722 "current_io_qpairs": 0, 00:11:20.722 "pending_bdev_io": 0, 00:11:20.722 "completed_nvme_io": 0, 00:11:20.722 "transports": [] 00:11:20.722 } 00:11:20.722 ] 00:11:20.722 }' 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:11:20.722 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:11:20.980 08:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.980 [2024-11-20 08:55:36.820186] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.980 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:11:20.980 "tick_rate": 2300000000, 00:11:20.980 "poll_groups": [ 00:11:20.980 { 00:11:20.980 "name": "nvmf_tgt_poll_group_000", 00:11:20.980 "admin_qpairs": 0, 00:11:20.980 "io_qpairs": 0, 00:11:20.980 "current_admin_qpairs": 0, 00:11:20.980 "current_io_qpairs": 0, 00:11:20.980 "pending_bdev_io": 0, 00:11:20.980 "completed_nvme_io": 0, 00:11:20.980 "transports": [ 00:11:20.980 { 00:11:20.980 "trtype": "TCP" 00:11:20.980 } 00:11:20.980 ] 00:11:20.980 }, 00:11:20.980 { 00:11:20.980 "name": "nvmf_tgt_poll_group_001", 00:11:20.980 "admin_qpairs": 0, 00:11:20.980 "io_qpairs": 0, 00:11:20.980 "current_admin_qpairs": 0, 00:11:20.980 "current_io_qpairs": 0, 00:11:20.980 "pending_bdev_io": 0, 00:11:20.980 
"completed_nvme_io": 0, 00:11:20.980 "transports": [ 00:11:20.980 { 00:11:20.980 "trtype": "TCP" 00:11:20.980 } 00:11:20.980 ] 00:11:20.980 }, 00:11:20.980 { 00:11:20.980 "name": "nvmf_tgt_poll_group_002", 00:11:20.980 "admin_qpairs": 0, 00:11:20.980 "io_qpairs": 0, 00:11:20.980 "current_admin_qpairs": 0, 00:11:20.980 "current_io_qpairs": 0, 00:11:20.980 "pending_bdev_io": 0, 00:11:20.980 "completed_nvme_io": 0, 00:11:20.980 "transports": [ 00:11:20.980 { 00:11:20.980 "trtype": "TCP" 00:11:20.980 } 00:11:20.980 ] 00:11:20.980 }, 00:11:20.980 { 00:11:20.980 "name": "nvmf_tgt_poll_group_003", 00:11:20.980 "admin_qpairs": 0, 00:11:20.980 "io_qpairs": 0, 00:11:20.981 "current_admin_qpairs": 0, 00:11:20.981 "current_io_qpairs": 0, 00:11:20.981 "pending_bdev_io": 0, 00:11:20.981 "completed_nvme_io": 0, 00:11:20.981 "transports": [ 00:11:20.981 { 00:11:20.981 "trtype": "TCP" 00:11:20.981 } 00:11:20.981 ] 00:11:20.981 } 00:11:20.981 ] 00:11:20.981 }' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:20.981 
08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 Malloc1 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:11:20.981 08:55:36 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.981 08:55:36 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:20.981 [2024-11-20 08:55:37.006631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:20.981 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:11:21.239 [2024-11-20 08:55:37.041288] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:21.239 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:21.239 could not add new controller: failed to write to nvme-fabrics device 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:21.239 08:55:37 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:22.612 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:11:22.612 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:22.612 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:22.612 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:22.612 08:55:38 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:24.510 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:24.510 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:11:24.511 08:55:40 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:24.511 [2024-11-20 08:55:40.476730] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:11:24.511 Failed to write to /dev/nvme-fabrics: Input/output error 00:11:24.511 could not add new controller: failed to write to nvme-fabrics device 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:11:24.511 
08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:24.511 08:55:40 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:25.886 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:11:25.887 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:25.887 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:25.887 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:25.887 08:55:41 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:27.785 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:27.786 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:27.786 08:55:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:27.786 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.043 [2024-11-20 08:55:43.842631] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:28.043 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:28.044 08:55:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:28.977 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:28.977 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:28.977 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:28.977 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:28.977 08:55:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:31.508 08:55:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:31.508 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:31.508 
08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 [2024-11-20 08:55:47.138539] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.508 08:55:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:32.441 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:32.441 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:32.441 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:32.441 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:32.441 08:55:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:34.356 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:34.628 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 08:55:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 [2024-11-20 08:55:50.469461] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:34.628 08:55:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:35.561 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:35.561 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:35.561 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:35.561 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:35.561 08:55:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:38.090 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 [2024-11-20 08:55:53.867827] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:38.090 08:55:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:39.024 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:39.024 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:39.024 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:11:39.024 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:39.024 08:55:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:11:41.545 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:41.545 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:41.545 08:55:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:41.545 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 [2024-11-20 08:55:57.179405] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.545 08:55:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:42.478 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:42.478 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:11:42.478 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:11:42.479 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:11:42.479 08:55:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:11:44.377 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:11:44.377 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:11:44.377 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:11:44.377 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:11:44.378 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:11:44.378 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:11:44.378 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:44.636 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 [2024-11-20 08:56:00.576685] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.636 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 [2024-11-20 08:56:00.624790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 
08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.637 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:11:44.637 [2024-11-20 08:56:00.672928] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 [2024-11-20 08:56:00.721093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 [2024-11-20 08:56:00.769258] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:44.896 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:44.896 "tick_rate": 2300000000, 00:11:44.896 "poll_groups": [ 00:11:44.896 { 00:11:44.896 "name": "nvmf_tgt_poll_group_000", 00:11:44.896 "admin_qpairs": 2, 00:11:44.896 "io_qpairs": 168, 00:11:44.896 "current_admin_qpairs": 0, 00:11:44.896 "current_io_qpairs": 0, 00:11:44.896 "pending_bdev_io": 0, 00:11:44.896 "completed_nvme_io": 267, 00:11:44.896 "transports": [ 00:11:44.896 { 00:11:44.896 "trtype": "TCP" 00:11:44.896 } 00:11:44.896 ] 00:11:44.896 }, 00:11:44.897 { 00:11:44.897 "name": "nvmf_tgt_poll_group_001", 00:11:44.897 "admin_qpairs": 2, 00:11:44.897 "io_qpairs": 168, 00:11:44.897 "current_admin_qpairs": 0, 00:11:44.897 "current_io_qpairs": 0, 00:11:44.897 "pending_bdev_io": 0, 00:11:44.897 "completed_nvme_io": 220, 00:11:44.897 "transports": [ 00:11:44.897 { 00:11:44.897 "trtype": "TCP" 00:11:44.897 } 00:11:44.897 ] 00:11:44.897 }, 00:11:44.897 { 00:11:44.897 "name": "nvmf_tgt_poll_group_002", 00:11:44.897 "admin_qpairs": 1, 00:11:44.897 "io_qpairs": 168, 00:11:44.897 "current_admin_qpairs": 0, 00:11:44.897 "current_io_qpairs": 0, 00:11:44.897 "pending_bdev_io": 0, 
00:11:44.897 "completed_nvme_io": 267, 00:11:44.897 "transports": [ 00:11:44.897 { 00:11:44.897 "trtype": "TCP" 00:11:44.897 } 00:11:44.897 ] 00:11:44.897 }, 00:11:44.897 { 00:11:44.897 "name": "nvmf_tgt_poll_group_003", 00:11:44.897 "admin_qpairs": 2, 00:11:44.897 "io_qpairs": 168, 00:11:44.897 "current_admin_qpairs": 0, 00:11:44.897 "current_io_qpairs": 0, 00:11:44.897 "pending_bdev_io": 0, 00:11:44.897 "completed_nvme_io": 268, 00:11:44.897 "transports": [ 00:11:44.897 { 00:11:44.897 "trtype": "TCP" 00:11:44.897 } 00:11:44.897 ] 00:11:44.897 } 00:11:44.897 ] 00:11:44.897 }' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:44.897 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:44.897 rmmod nvme_tcp 00:11:45.156 rmmod nvme_fabrics 00:11:45.156 rmmod nvme_keyring 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 2267035 ']' 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 2267035 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 2267035 ']' 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 2267035 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:45.156 08:56:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2267035 00:11:45.156 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:45.156 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:45.156 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2267035' 00:11:45.156 killing process with pid 2267035 00:11:45.156 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 2267035 00:11:45.156 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 2267035 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@264 -- # local dev 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:45.414 08:56:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # return 0 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:47.319 08:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@284 -- # iptr 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-save 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-restore 00:11:47.319 00:11:47.319 real 
0m33.833s 00:11:47.319 user 1m42.614s 00:11:47.319 sys 0m6.579s 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:47.319 ************************************ 00:11:47.319 END TEST nvmf_rpc 00:11:47.319 ************************************ 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:47.319 08:56:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:47.580 ************************************ 00:11:47.580 START TEST nvmf_invalid 00:11:47.580 ************************************ 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:47.580 * Looking for test storage... 
00:11:47.580 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.580 --rc genhtml_branch_coverage=1 00:11:47.580 --rc 
genhtml_function_coverage=1 00:11:47.580 --rc genhtml_legend=1 00:11:47.580 --rc geninfo_all_blocks=1 00:11:47.580 --rc geninfo_unexecuted_blocks=1 00:11:47.580 00:11:47.580 ' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.580 --rc genhtml_branch_coverage=1 00:11:47.580 --rc genhtml_function_coverage=1 00:11:47.580 --rc genhtml_legend=1 00:11:47.580 --rc geninfo_all_blocks=1 00:11:47.580 --rc geninfo_unexecuted_blocks=1 00:11:47.580 00:11:47.580 ' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.580 --rc genhtml_branch_coverage=1 00:11:47.580 --rc genhtml_function_coverage=1 00:11:47.580 --rc genhtml_legend=1 00:11:47.580 --rc geninfo_all_blocks=1 00:11:47.580 --rc geninfo_unexecuted_blocks=1 00:11:47.580 00:11:47.580 ' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:47.580 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:47.580 --rc genhtml_branch_coverage=1 00:11:47.580 --rc genhtml_function_coverage=1 00:11:47.580 --rc genhtml_legend=1 00:11:47.580 --rc geninfo_all_blocks=1 00:11:47.580 --rc geninfo_unexecuted_blocks=1 00:11:47.580 00:11:47.580 ' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:47.580 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:47.581 08:56:03 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 
00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:47.581 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 
00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:11:47.581 08:56:03 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:54.154 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:54.154 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.154 
08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:54.154 Found net devices under 0000:86:00.0: cvl_0_0 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:54.154 Found net devices under 0000:86:00.1: cvl_0_1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:54.154 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # create_target_ns 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 
-- # local -g _dev 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:54.154 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:54.154 10.0.0.1 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.154 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:54.155 10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:54.155 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:54.155 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:11:54.155 00:11:54.155 --- 10.0.0.1 ping statistics --- 00:11:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.155 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:54.155 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:54.155 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:54.155 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:11:54.155 00:11:54.155 --- 10.0.0.2 ping statistics --- 00:11:54.155 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:54.155 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.155 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:54.155 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:54.156 08:56:09 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=2274890 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 2274890 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 2274890 ']' 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:54.156 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:54.156 [2024-11-20 08:56:09.731721] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:11:54.156 [2024-11-20 08:56:09.731772] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:54.156 [2024-11-20 08:56:09.810189] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:54.156 [2024-11-20 08:56:09.854851] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:54.156 [2024-11-20 08:56:09.854883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:54.156 [2024-11-20 08:56:09.854890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:54.156 [2024-11-20 08:56:09.854897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:54.156 [2024-11-20 08:56:09.854902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:54.156 [2024-11-20 08:56:09.856366] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:11:54.156 [2024-11-20 08:56:09.856476] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:11:54.156 [2024-11-20 08:56:09.856583] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:11:54.156 [2024-11-20 08:56:09.856584] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:11:54.156 08:56:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode32037
00:11:54.156 [2024-11-20 08:56:10.170011] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request:
00:11:54.414 {
00:11:54.414 "nqn": "nqn.2016-06.io.spdk:cnode32037",
00:11:54.414 "tgt_name": "foobar",
00:11:54.414 "method": "nvmf_create_subsystem",
00:11:54.414 "req_id": 1
00:11:54.414 }
00:11:54.414 Got JSON-RPC error response
00:11:54.414 response:
00:11:54.414 {
00:11:54.414 "code": -32603,
00:11:54.414 "message": "Unable to find target foobar"
00:11:54.414 }'
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request:
00:11:54.414 {
00:11:54.414 "nqn": "nqn.2016-06.io.spdk:cnode32037",
00:11:54.414 "tgt_name": "foobar",
00:11:54.414 "method": "nvmf_create_subsystem",
00:11:54.414 "req_id": 1
00:11:54.414 }
00:11:54.414 Got JSON-RPC error response
00:11:54.414 response:
00:11:54.414 {
00:11:54.414 "code": -32603,
00:11:54.414 "message": "Unable to find target foobar"
00:11:54.414 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]]
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f'
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode5227
00:11:54.414 [2024-11-20 08:56:10.382793] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5227: invalid serial number 'SPDKISFASTANDAWESOME'
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request:
00:11:54.414 {
00:11:54.414 "nqn": "nqn.2016-06.io.spdk:cnode5227",
00:11:54.414 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:54.414 "method": "nvmf_create_subsystem",
00:11:54.414 "req_id": 1
00:11:54.414 }
00:11:54.414 Got JSON-RPC error response
00:11:54.414 response:
00:11:54.414 {
00:11:54.414 "code": -32602,
00:11:54.414 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:54.414 }'
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request:
00:11:54.414 {
00:11:54.414 "nqn": "nqn.2016-06.io.spdk:cnode5227",
00:11:54.414 "serial_number": "SPDKISFASTANDAWESOME\u001f",
00:11:54.414 "method": "nvmf_create_subsystem",
00:11:54.414 "req_id": 1
00:11:54.414 }
00:11:54.414 Got JSON-RPC error response
00:11:54.414 response:
00:11:54.414 {
00:11:54.414 "code": -32602,
00:11:54.414 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f"
00:11:54.414 } == *\I\n\v\a\l\i\d\ \S\N* ]]
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f'
00:11:54.414 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15670
00:11:54.672 [2024-11-20 08:56:10.595512] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15670: invalid model number 'SPDK_Controller'
00:11:54.672 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request:
00:11:54.672 {
00:11:54.672 "nqn": "nqn.2016-06.io.spdk:cnode15670",
00:11:54.672 "model_number": "SPDK_Controller\u001f",
00:11:54.672 "method": "nvmf_create_subsystem",
00:11:54.672 "req_id": 1
00:11:54.672 }
00:11:54.672 Got JSON-RPC error response
00:11:54.672 response:
00:11:54.672 {
00:11:54.672 "code": -32602,
00:11:54.672 "message": "Invalid MN SPDK_Controller\u001f"
00:11:54.672 }'
00:11:54.672 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request:
00:11:54.672 {
00:11:54.672 "nqn": "nqn.2016-06.io.spdk:cnode15670",
00:11:54.672 "model_number": "SPDK_Controller\u001f",
00:11:54.672 "method": "nvmf_create_subsystem",
00:11:54.672 "req_id": 1
00:11:54.672 }
00:11:54.672 Got JSON-RPC error response
00:11:54.672 response:
00:11:54.672 {
00:11:54.672 "code": -32602,
00:11:54.673 "message": "Invalid MN SPDK_Controller\u001f"
00:11:54.673 } == *\I\n\v\a\l\i\d\ \M\N* ]]
00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21
00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll
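The long xtrace that follows is the test's gen_random_s helper building a 21-character string one character at a time: pick an ASCII code from the chars array (codes 32 through 127), render it with printf %x plus echo -e, and append it via string+=. A compact sketch of that logic, reconstructed from the trace rather than copied from SPDK's invalid.sh source (the leading-dash handling in particular is an assumption), is:

```shell
#!/usr/bin/env bash
# Sketch of gen_random_s as inferred from the xtrace below; not SPDK's
# verbatim source. Builds a string of $1 characters drawn from ASCII
# codes 32..127, the same list as the chars=('32' ... '127') array.
gen_random_s() {
	local length=$1 ll
	local chars=({32..127})
	local string=""
	for ((ll = 0; ll < length; ll++)); do
		# pick a random code, convert it to hex with printf %x (as the
		# trace does), then decode that hex escape into one character
		local ch
		printf -v ch "\\x$(printf %x "${chars[RANDOM % ${#chars[@]}]}")"
		string+=$ch
	done
	# mirrors the trace's [[ X == \- ]] check: a leading '-' would look
	# like a CLI flag when the string is passed to rpc.py (the exact
	# replacement behavior is assumed here)
	[[ ${string:0:1} == - ]] && string="X${string:1}"
	printf '%s\n' "$string"
}

gen_random_s 21   # e.g. 'X7r9-wJrbCMu)LnTe=c;x' in this run
```

In this run the helper produced 'X7r9-wJrbCMu)LnTe=c;x', which rpc.py then submits as a serial number; since NVMe serial numbers are capped at 20 bytes, the 21-character string is rejected with "Invalid SN".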
00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:54.673 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 74 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4a' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=J 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:54.673 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 41 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x29' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=')' 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.673 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:54.931 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:54.931 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:54.932 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ X == \- ]] 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'X7r9-wJrbCMu)LnTe=c;x' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'X7r9-wJrbCMu)LnTe=c;x' nqn.2016-06.io.spdk:cnode18264 00:11:54.932 [2024-11-20 08:56:10.940703] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode18264: invalid serial number 'X7r9-wJrbCMu)LnTe=c;x' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:54.932 { 00:11:54.932 "nqn": "nqn.2016-06.io.spdk:cnode18264", 00:11:54.932 "serial_number": "X7r9-wJrbCMu)LnTe=c;x", 00:11:54.932 "method": "nvmf_create_subsystem", 00:11:54.932 "req_id": 1 00:11:54.932 } 00:11:54.932 Got JSON-RPC error response 00:11:54.932 response: 00:11:54.932 { 00:11:54.932 "code": -32602, 00:11:54.932 "message": "Invalid SN X7r9-wJrbCMu)LnTe=c;x" 00:11:54.932 }' 00:11:54.932 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:54.932 { 00:11:54.932 "nqn": "nqn.2016-06.io.spdk:cnode18264", 00:11:54.932 "serial_number": "X7r9-wJrbCMu)LnTe=c;x", 00:11:54.932 "method": "nvmf_create_subsystem", 00:11:54.932 "req_id": 1 00:11:54.932 } 00:11:54.932 Got JSON-RPC error response 00:11:54.932 response: 00:11:54.932 { 00:11:54.932 "code": -32602, 00:11:54.932 "message": "Invalid SN X7r9-wJrbCMu)LnTe=c;x" 00:11:54.932 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:55.197 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 43 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2b' 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=+ 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:10 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:11:55.197 08:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:55.197 08:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 77 00:11:55.197 08:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4d' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=M 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:55.197 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 
00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 
00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 
00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 
08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:55.198 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ Z == \- ]] 00:11:55.457 08:56:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Z+vCF2>NLqT'\''M"!NLqT'\''M"!NLqT'M"!\u007fNLqT'\''M\"!\u007fNLqT'M\"!\u007fNLqT'M\"! 
/dev/null' 00:11:57.521 08:56:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # return 0 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:00.058 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@284 -- # iptr 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-save 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-restore 00:12:00.059 00:12:00.059 real 0m12.175s 00:12:00.059 user 0m18.787s 00:12:00.059 sys 0m5.448s 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:00.059 ************************************ 00:12:00.059 END TEST nvmf_invalid 00:12:00.059 ************************************ 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.059 
************************************ 00:12:00.059 START TEST nvmf_connect_stress 00:12:00.059 ************************************ 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:00.059 * Looking for test storage... 00:12:00.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 
00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.059 --rc genhtml_branch_coverage=1 00:12:00.059 --rc genhtml_function_coverage=1 00:12:00.059 --rc genhtml_legend=1 00:12:00.059 --rc geninfo_all_blocks=1 00:12:00.059 --rc geninfo_unexecuted_blocks=1 00:12:00.059 00:12:00.059 ' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.059 --rc genhtml_branch_coverage=1 00:12:00.059 --rc genhtml_function_coverage=1 00:12:00.059 --rc genhtml_legend=1 00:12:00.059 --rc geninfo_all_blocks=1 00:12:00.059 --rc geninfo_unexecuted_blocks=1 00:12:00.059 00:12:00.059 ' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.059 --rc genhtml_branch_coverage=1 00:12:00.059 --rc genhtml_function_coverage=1 00:12:00.059 --rc genhtml_legend=1 00:12:00.059 --rc geninfo_all_blocks=1 00:12:00.059 --rc geninfo_unexecuted_blocks=1 00:12:00.059 00:12:00.059 ' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.059 --rc genhtml_branch_coverage=1 00:12:00.059 --rc genhtml_function_coverage=1 00:12:00.059 --rc genhtml_legend=1 00:12:00.059 --rc geninfo_all_blocks=1 
00:12:00.059 --rc geninfo_unexecuted_blocks=1 00:12:00.059 00:12:00.059 ' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NET_TYPE=phy 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.059 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:00.060 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:12:00.060 08:56:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.633 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:06.634 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:06.634 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:06.634 Found net devices under 0000:86:00.0: cvl_0_0 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:06.634 Found net devices under 0000:86:00.1: cvl_0_1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@145 -- # ip netns add 
nvmf_ns_spdk 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:06.634 08:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns 
nvmf_ns_spdk 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:06.634 10.0.0.1 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.634 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:06.635 10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:06.635 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:06.635 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.369 ms 00:12:06.635 00:12:06.635 --- 10.0.0.1 ping statistics --- 00:12:06.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.635 rtt min/avg/max/mdev = 0.369/0.369/0.369/0.000 ms 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:06.635 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:06.635 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:12:06.635 00:12:06.635 --- 10.0.0.2 ping statistics --- 00:12:06.635 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.635 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:06.635 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:06.636 08:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:12:06.636 
08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=2279297 00:12:06.636 08:56:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 2279297 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 2279297 ']' 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.636 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.636 08:56:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.636 [2024-11-20 08:56:21.967966] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:12:06.636 [2024-11-20 08:56:21.968012] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.636 [2024-11-20 08:56:22.047651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:12:06.636 [2024-11-20 08:56:22.089486] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:06.636 [2024-11-20 08:56:22.089523] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.636 [2024-11-20 08:56:22.089531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.636 [2024-11-20 08:56:22.089537] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.636 [2024-11-20 08:56:22.089542] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:06.636 [2024-11-20 08:56:22.090904] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.636 [2024-11-20 08:56:22.090936] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.636 [2024-11-20 08:56:22.090937] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.636 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:12:06.637 [2024-11-20 08:56:22.227683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.637 [2024-11-20 08:56:22.247885] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.637 NULL1 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2279320 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.637 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:06.895 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:06.895 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:06.895 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:06.895 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:06.895 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.153 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.153 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:07.153 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.153 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.153 08:56:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.410 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.410 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:07.410 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.410 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.410 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:07.668 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:07.668 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:07.668 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:07.668 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:07.668 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.233 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.233 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:08.233 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.233 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.233 08:56:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.491 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.491 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:08.491 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.491 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.491 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:08.749 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:08.749 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:08.749 08:56:24 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:08.749 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:08.749 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.007 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.007 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:09.007 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.007 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.007 08:56:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.264 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.264 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:09.264 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.264 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.264 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:09.830 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:09.830 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:09.830 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:09.830 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:09.830 
08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.086 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.086 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:10.086 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.086 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.086 08:56:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.343 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.343 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:10.343 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.343 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.343 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:10.601 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:10.601 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:10.601 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:10.601 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:10.601 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.166 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.166 
08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:11.166 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.166 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.166 08:56:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.423 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.423 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:11.423 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.423 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.423 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.681 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.681 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:11.681 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:11.681 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.681 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:11.939 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:11.939 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:11.939 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 
00:12:11.939 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:11.939 08:56:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.197 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.197 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:12.197 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.197 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.197 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:12.762 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:12.762 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:12.762 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:12.762 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:12.762 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.020 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.020 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:13.020 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.020 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.020 08:56:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set 
+x 00:12:13.277 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.277 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:13.277 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.277 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.277 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:13.534 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:13.534 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:13.534 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:13.534 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:13.534 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.099 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.099 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:14.099 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.099 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.099 08:56:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.357 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.357 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill 
-0 2279320 00:12:14.357 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.357 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.357 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.615 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.615 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:14.615 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.615 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.615 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:14.873 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:14.873 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:14.873 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:14.873 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:14.873 08:56:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.129 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.129 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:15.129 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.129 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:12:15.129 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.708 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.708 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:15.708 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.708 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.708 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:15.986 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:15.986 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:15.986 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:15.986 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:15.986 08:56:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.258 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.258 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:16.259 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.259 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.259 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.516 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- 
# [[ 0 == 0 ]] 00:12:16.516 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:16.516 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:12:16.516 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.516 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:16.516 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2279320 00:12:16.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2279320) - No such process 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2279320 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e 00:12:16.773 08:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:16.773 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:16.773 rmmod nvme_tcp 00:12:16.773 rmmod nvme_fabrics 00:12:16.773 rmmod nvme_keyring 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 2279297 ']' 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 2279297 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 2279297 ']' 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 2279297 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname 00:12:17.032 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:17.033 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2279297 00:12:17.033 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:17.033 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:17.033 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2279297' 00:12:17.033 killing process with pid 2279297 00:12:17.033 08:56:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 2279297 00:12:17.033 08:56:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 2279297 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@264 -- # local dev 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:17.033 08:56:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # return 0 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local 
dev=cvl_0_0 in_ns= 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:19.569 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@284 -- # iptr 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-save 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:19.570 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-restore 00:12:19.570 00:12:19.570 real 0m19.514s 00:12:19.570 user 0m40.574s 00:12:19.570 sys 0m8.597s 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:12:19.570 ************************************ 00:12:19.570 END TEST nvmf_connect_stress 00:12:19.570 ************************************ 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:19.570 ************************************ 00:12:19.570 START TEST nvmf_fused_ordering 00:12:19.570 ************************************ 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:12:19.570 * Looking for test storage... 
00:12:19.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:12:19.570 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:19.570 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:19.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.570 --rc genhtml_branch_coverage=1 00:12:19.570 --rc genhtml_function_coverage=1 00:12:19.570 --rc genhtml_legend=1 00:12:19.570 --rc geninfo_all_blocks=1 00:12:19.570 --rc geninfo_unexecuted_blocks=1 00:12:19.570 00:12:19.570 ' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:19.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.570 --rc genhtml_branch_coverage=1 00:12:19.570 --rc genhtml_function_coverage=1 00:12:19.570 --rc genhtml_legend=1 00:12:19.570 --rc geninfo_all_blocks=1 00:12:19.570 --rc geninfo_unexecuted_blocks=1 00:12:19.570 00:12:19.570 ' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:19.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.570 --rc genhtml_branch_coverage=1 00:12:19.570 --rc genhtml_function_coverage=1 00:12:19.570 --rc genhtml_legend=1 00:12:19.570 --rc geninfo_all_blocks=1 00:12:19.570 --rc geninfo_unexecuted_blocks=1 00:12:19.570 00:12:19.570 ' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:19.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:19.570 --rc genhtml_branch_coverage=1 00:12:19.570 --rc genhtml_function_coverage=1 00:12:19.570 --rc genhtml_legend=1 00:12:19.570 --rc geninfo_all_blocks=1 00:12:19.570 --rc geninfo_unexecuted_blocks=1 00:12:19.570 00:12:19.570 ' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:19.570 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:19.571 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:19.571 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:19.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:12:19.571 08:56:35 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:12:19.571 08:56:35 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 
00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:26.144 08:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:26.144 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:26.144 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:26.144 Found net devices under 0000:86:00.0: cvl_0_0 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:26.144 Found net devices under 0000:86:00.1: cvl_0_1 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:26.144 
08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:26.144 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # create_target_ns 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 
00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 
-- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:26.145 10.0.0.1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:26.145 08:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:26.145 10.0.0.2 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:26.145 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:26.146 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:26.146 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.481 ms 00:12:26.146 00:12:26.146 --- 10.0.0.1 ping statistics --- 00:12:26.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.146 rtt min/avg/max/mdev = 0.481/0.481/0.481/0.000 ms 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:26.146 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:26.146 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:12:26.146 00:12:26.146 --- 10.0.0.2 ping statistics --- 00:12:26.146 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:26.146 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:26.146 08:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:12:26.146 
08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:26.146 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=2284720 00:12:26.147 08:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 2284720 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 2284720 ']' 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:26.147 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 [2024-11-20 08:56:41.560836] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:12:26.147 [2024-11-20 08:56:41.560885] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:26.147 [2024-11-20 08:56:41.641015] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:26.147 [2024-11-20 08:56:41.680443] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:12:26.147 [2024-11-20 08:56:41.680479] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:26.147 [2024-11-20 08:56:41.680486] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:26.147 [2024-11-20 08:56:41.680492] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:26.147 [2024-11-20 08:56:41.680497] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:26.147 [2024-11-20 08:56:41.681064] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 [2024-11-20 08:56:41.828231] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 [2024-11-20 08:56:41.848422] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 NULL1 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:12:26.147 08:56:41 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.147 08:56:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:12:26.147 [2024-11-20 08:56:41.908126] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:12:26.147 [2024-11-20 08:56:41.908158] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2284741 ] 00:12:26.406 Attached to nqn.2016-06.io.spdk:cnode1 00:12:26.406 Namespace ID: 1 size: 1GB 00:12:26.406 fused_ordering(0) 00:12:26.406 fused_ordering(1) 00:12:26.406 fused_ordering(2) 00:12:26.406 fused_ordering(3) 00:12:26.406 fused_ordering(4) 00:12:26.406 fused_ordering(5) 00:12:26.406 fused_ordering(6) 00:12:26.406 fused_ordering(7) 00:12:26.406 fused_ordering(8) 00:12:26.406 fused_ordering(9) 00:12:26.406 fused_ordering(10) 00:12:26.406 fused_ordering(11) 00:12:26.406 fused_ordering(12) 00:12:26.406 fused_ordering(13) 00:12:26.406 fused_ordering(14) 00:12:26.406 fused_ordering(15) 00:12:26.406 fused_ordering(16) 00:12:26.406 fused_ordering(17) 00:12:26.406 fused_ordering(18) 00:12:26.406 fused_ordering(19) 00:12:26.406 fused_ordering(20) 00:12:26.406 fused_ordering(21) 00:12:26.406 fused_ordering(22) 00:12:26.406 fused_ordering(23) 00:12:26.406 fused_ordering(24) 00:12:26.406 fused_ordering(25) 00:12:26.406 fused_ordering(26) 00:12:26.406 fused_ordering(27) 00:12:26.406 fused_ordering(28) 00:12:26.406 fused_ordering(29) 00:12:26.406 fused_ordering(30) 00:12:26.406 fused_ordering(31) 00:12:26.406 fused_ordering(32) 00:12:26.406 fused_ordering(33) 00:12:26.406 fused_ordering(34) 00:12:26.406 fused_ordering(35) 00:12:26.406 fused_ordering(36) 00:12:26.406 fused_ordering(37) 00:12:26.406 fused_ordering(38) 00:12:26.406 fused_ordering(39) 00:12:26.406 fused_ordering(40) 00:12:26.406 fused_ordering(41) 00:12:26.406 fused_ordering(42) 00:12:26.406 fused_ordering(43) 00:12:26.406 fused_ordering(44) 00:12:26.406 fused_ordering(45) 00:12:26.406 fused_ordering(46) 00:12:26.406 fused_ordering(47) 00:12:26.406 fused_ordering(48) 00:12:26.406 fused_ordering(49) 00:12:26.406 
fused_ordering(50) 00:12:26.406 fused_ordering(51) 00:12:26.406 fused_ordering(52) 00:12:26.406 fused_ordering(53) 00:12:26.406 fused_ordering(54) 00:12:26.406 fused_ordering(55) 00:12:26.406 fused_ordering(56) 00:12:26.406 fused_ordering(57) 00:12:26.406 fused_ordering(58) 00:12:26.406 fused_ordering(59) 00:12:26.406 fused_ordering(60) 00:12:26.406 fused_ordering(61) 00:12:26.406 fused_ordering(62) 00:12:26.406 fused_ordering(63) 00:12:26.406 fused_ordering(64) 00:12:26.406 fused_ordering(65) 00:12:26.406 fused_ordering(66) 00:12:26.406 fused_ordering(67) 00:12:26.406 fused_ordering(68) 00:12:26.406 fused_ordering(69) 00:12:26.406 fused_ordering(70) 00:12:26.406 fused_ordering(71) 00:12:26.406 fused_ordering(72) 00:12:26.406 fused_ordering(73) 00:12:26.406 fused_ordering(74) 00:12:26.406 fused_ordering(75) 00:12:26.406 fused_ordering(76) 00:12:26.406 fused_ordering(77) 00:12:26.406 fused_ordering(78) 00:12:26.406 fused_ordering(79) 00:12:26.406 fused_ordering(80) 00:12:26.406 fused_ordering(81) 00:12:26.406 fused_ordering(82) 00:12:26.406 fused_ordering(83) 00:12:26.406 fused_ordering(84) 00:12:26.406 fused_ordering(85) 00:12:26.406 fused_ordering(86) 00:12:26.406 fused_ordering(87) 00:12:26.406 fused_ordering(88) 00:12:26.407 fused_ordering(89) 00:12:26.407 fused_ordering(90) 00:12:26.407 fused_ordering(91) 00:12:26.407 fused_ordering(92) 00:12:26.407 fused_ordering(93) 00:12:26.407 fused_ordering(94) 00:12:26.407 fused_ordering(95) 00:12:26.407 fused_ordering(96) 00:12:26.407 fused_ordering(97) 00:12:26.407 fused_ordering(98) 00:12:26.407 fused_ordering(99) 00:12:26.407 fused_ordering(100) 00:12:26.407 fused_ordering(101) 00:12:26.407 fused_ordering(102) 00:12:26.407 fused_ordering(103) 00:12:26.407 fused_ordering(104) 00:12:26.407 fused_ordering(105) 00:12:26.407 fused_ordering(106) 00:12:26.407 fused_ordering(107) 00:12:26.407 fused_ordering(108) 00:12:26.407 fused_ordering(109) 00:12:26.407 fused_ordering(110) 00:12:26.407 fused_ordering(111) 00:12:26.407 
fused_ordering(112) 00:12:26.407 fused_ordering(113) 00:12:26.407 fused_ordering(114) 00:12:26.407 fused_ordering(115) 00:12:26.407 fused_ordering(116) 00:12:26.407 fused_ordering(117) 00:12:26.407 fused_ordering(118) 00:12:26.407 fused_ordering(119) 00:12:26.407 fused_ordering(120) 00:12:26.407 fused_ordering(121) 00:12:26.407 fused_ordering(122) 00:12:26.407 fused_ordering(123) 00:12:26.407 fused_ordering(124) 00:12:26.407 fused_ordering(125) 00:12:26.407 fused_ordering(126) 00:12:26.407 fused_ordering(127) 00:12:26.407 fused_ordering(128) 00:12:26.407 fused_ordering(129) 00:12:26.407 fused_ordering(130) 00:12:26.407 fused_ordering(131) 00:12:26.407 fused_ordering(132) 00:12:26.407 fused_ordering(133) 00:12:26.407 fused_ordering(134) 00:12:26.407 fused_ordering(135) 00:12:26.407 fused_ordering(136) 00:12:26.407 fused_ordering(137) 00:12:26.407 fused_ordering(138) 00:12:26.407 fused_ordering(139) 00:12:26.407 fused_ordering(140) 00:12:26.407 fused_ordering(141) 00:12:26.407 fused_ordering(142) 00:12:26.407 fused_ordering(143) 00:12:26.407 fused_ordering(144) 00:12:26.407 fused_ordering(145) 00:12:26.407 fused_ordering(146) 00:12:26.407 fused_ordering(147) 00:12:26.407 fused_ordering(148) 00:12:26.407 fused_ordering(149) 00:12:26.407 fused_ordering(150) 00:12:26.407 fused_ordering(151) 00:12:26.407 fused_ordering(152) 00:12:26.407 fused_ordering(153) 00:12:26.407 fused_ordering(154) 00:12:26.407 fused_ordering(155) 00:12:26.407 fused_ordering(156) 00:12:26.407 fused_ordering(157) 00:12:26.407 fused_ordering(158) 00:12:26.407 fused_ordering(159) 00:12:26.407 fused_ordering(160) 00:12:26.407 fused_ordering(161) 00:12:26.407 fused_ordering(162) 00:12:26.407 fused_ordering(163) 00:12:26.407 fused_ordering(164) 00:12:26.407 fused_ordering(165) 00:12:26.407 fused_ordering(166) 00:12:26.407 fused_ordering(167) 00:12:26.407 fused_ordering(168) 00:12:26.407 fused_ordering(169) 00:12:26.407 fused_ordering(170) 00:12:26.407 fused_ordering(171) 00:12:26.407 fused_ordering(172) 
00:12:26.407 fused_ordering(173) 00:12:26.407 fused_ordering(174) 00:12:26.407 fused_ordering(175) 00:12:26.407 fused_ordering(176) 00:12:26.407 fused_ordering(177) 00:12:26.407 fused_ordering(178) 00:12:26.407 fused_ordering(179) 00:12:26.407 fused_ordering(180) 00:12:26.407 fused_ordering(181) 00:12:26.407 fused_ordering(182) 00:12:26.407 fused_ordering(183) 00:12:26.407 fused_ordering(184) 00:12:26.407 fused_ordering(185) 00:12:26.407 fused_ordering(186) 00:12:26.407 fused_ordering(187) 00:12:26.407 fused_ordering(188) 00:12:26.407 fused_ordering(189) 00:12:26.407 fused_ordering(190) 00:12:26.407 fused_ordering(191) 00:12:26.407 fused_ordering(192) 00:12:26.407 fused_ordering(193) 00:12:26.407 fused_ordering(194) 00:12:26.407 fused_ordering(195) 00:12:26.407 fused_ordering(196) 00:12:26.407 fused_ordering(197) 00:12:26.407 fused_ordering(198) 00:12:26.407 fused_ordering(199) 00:12:26.407 fused_ordering(200) 00:12:26.407 fused_ordering(201) 00:12:26.407 fused_ordering(202) 00:12:26.407 fused_ordering(203) 00:12:26.407 fused_ordering(204) 00:12:26.407 fused_ordering(205) 00:12:26.665 fused_ordering(206) 00:12:26.665 fused_ordering(207) 00:12:26.665 fused_ordering(208) 00:12:26.665 fused_ordering(209) 00:12:26.666 fused_ordering(210) 00:12:26.666 fused_ordering(211) 00:12:26.666 fused_ordering(212) 00:12:26.666 fused_ordering(213) 00:12:26.666 fused_ordering(214) 00:12:26.666 fused_ordering(215) 00:12:26.666 fused_ordering(216) 00:12:26.666 fused_ordering(217) 00:12:26.666 fused_ordering(218) 00:12:26.666 fused_ordering(219) 00:12:26.666 fused_ordering(220) 00:12:26.666 fused_ordering(221) 00:12:26.666 fused_ordering(222) 00:12:26.666 fused_ordering(223) 00:12:26.666 fused_ordering(224) 00:12:26.666 fused_ordering(225) 00:12:26.666 fused_ordering(226) 00:12:26.666 fused_ordering(227) 00:12:26.666 fused_ordering(228) 00:12:26.666 fused_ordering(229) 00:12:26.666 fused_ordering(230) 00:12:26.666 fused_ordering(231) 00:12:26.666 fused_ordering(232) 00:12:26.666 
00:12:26.666 fused_ordering(233) ... 00:12:28.061 fused_ordering(1018) [repetitive fused_ordering progress counters for iterations 233-1018 elided; timestamps advance from 00:12:26.666 through 00:12:28.061]
fused_ordering(1019) 00:12:28.061 fused_ordering(1020) 00:12:28.061 fused_ordering(1021) 00:12:28.061 fused_ordering(1022) 00:12:28.061 fused_ordering(1023) 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:28.061 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:28.061 rmmod nvme_tcp 00:12:28.061 rmmod nvme_fabrics 00:12:28.062 rmmod nvme_keyring 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 2284720 ']' 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 2284720 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 2284720 ']' 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 2284720 00:12:28.062 08:56:43 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2284720 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2284720' 00:12:28.062 killing process with pid 2284720 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 2284720 00:12:28.062 08:56:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 2284720 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@264 -- # local dev 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:28.062 08:56:44 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@268 -- # 
delete_main_bridge 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # return 0 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@284 -- # iptr 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-save 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-restore 00:12:30.600 00:12:30.600 real 0m10.949s 00:12:30.600 user 0m5.304s 00:12:30.600 sys 0m5.912s 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:12:30.600 ************************************ 00:12:30.600 END TEST nvmf_fused_ordering 00:12:30.600 ************************************ 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:30.600 ************************************ 00:12:30.600 START TEST nvmf_ns_masking 00:12:30.600 
************************************ 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:12:30.600 * Looking for test storage... 00:12:30.600 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.600 --rc genhtml_branch_coverage=1 00:12:30.600 --rc genhtml_function_coverage=1 00:12:30.600 --rc genhtml_legend=1 00:12:30.600 --rc geninfo_all_blocks=1 00:12:30.600 --rc geninfo_unexecuted_blocks=1 00:12:30.600 00:12:30.600 ' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.600 --rc genhtml_branch_coverage=1 00:12:30.600 --rc genhtml_function_coverage=1 00:12:30.600 --rc genhtml_legend=1 00:12:30.600 --rc geninfo_all_blocks=1 00:12:30.600 --rc geninfo_unexecuted_blocks=1 00:12:30.600 00:12:30.600 ' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.600 --rc genhtml_branch_coverage=1 00:12:30.600 --rc genhtml_function_coverage=1 00:12:30.600 --rc genhtml_legend=1 00:12:30.600 --rc geninfo_all_blocks=1 00:12:30.600 --rc geninfo_unexecuted_blocks=1 00:12:30.600 00:12:30.600 ' 00:12:30.600 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:30.600 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:30.600 --rc genhtml_branch_coverage=1 00:12:30.601 --rc genhtml_function_coverage=1 00:12:30.601 --rc genhtml_legend=1 00:12:30.601 --rc geninfo_all_blocks=1 00:12:30.601 --rc geninfo_unexecuted_blocks=1 00:12:30.601 00:12:30.601 ' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:30.601 08:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:30.601 08:56:46 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:30.601 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=09cee773-5996-4d36-b143-38a5d4268224 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=1560d556-f009-4097-8b7f-55dd410b8336 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=a5163c36-6d5e-4a67-b516-dcc5f454d51a 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:12:30.601 08:56:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:37.174 
08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:37.174 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:37.174 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:37.174 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:37.174 Found net devices under 0000:86:00.0: cvl_0_0 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:37.174 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:37.174 Found net devices under 0000:86:00.1: cvl_0_1 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # create_target_ns 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:37.174 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:37.175 10.0.0.1 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:12:37.175 08:56:52 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
00:12:37.175 10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@75 -- # set_up cvl_0_0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns=
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@78 -- # [[ phy == veth ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:12:37.175 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:12:37.175 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms
00:12:37.175 
00:12:37.175 --- 10.0.0.1 ping statistics ---
00:12:37.175 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:37.175 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:12:37.175 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:12:37.175 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:12:37.175 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms
00:12:37.175 
00:12:37.175 --- 10.0.0.2 ping statistics ---
00:12:37.176 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:12:37.176 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair++ ))
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=2288735
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 2288735
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 2288735 ']'
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:12:37.176 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:37.176 [2024-11-20 08:56:52.621758] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:12:37.176 [2024-11-20 08:56:52.621808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:12:37.176 [2024-11-20 08:56:52.702854] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:12:37.176 [2024-11-20 08:56:52.742545] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:12:37.176 [2024-11-20 08:56:52.742592] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:12:37.176 [2024-11-20 08:56:52.742599] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:12:37.176 [2024-11-20 08:56:52.742606] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:12:37.176 [2024-11-20 08:56:52.742611] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:12:37.176 [2024-11-20 08:56:52.743191] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:12:37.176 08:56:52 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:12:37.176 [2024-11-20 08:56:53.050718] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:12:37.177 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:12:37.177 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:12:37.177 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:12:37.436 Malloc1
00:12:37.436 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:12:37.693 Malloc2
00:12:37.693 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:12:37.693 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:12:37.951 08:56:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:12:38.209 [2024-11-20 08:56:54.080370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:12:38.209 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:12:38.209 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5163c36-6d5e-4a67-b516-dcc5f454d51a -a 10.0.0.2 -s 4420 -i 4
00:12:38.468 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:12:38.468 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:12:38.468 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:38.468 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:12:38.468 08:56:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:40.371 [ 0]:0x1
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:40.371 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23e326ed7e3e400aad5aaa4955fa6bad
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23e326ed7e3e400aad5aaa4955fa6bad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:40.630 [ 0]:0x1
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:40.630 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23e326ed7e3e400aad5aaa4955fa6bad
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23e326ed7e3e400aad5aaa4955fa6bad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:40.889 [ 1]:0x2
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:40.889 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:12:40.890 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:12:40.890 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:12:40.890 08:56:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:12:41.148 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5163c36-6d5e-4a67-b516-dcc5f454d51a -a 10.0.0.2 -s 4420 -i 4
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:12:41.406 08:56:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:12:43.312 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:12:43.312 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:12:43.312 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:43.576 [ 0]:0x2
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:43.576 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:43.835 [ 0]:0x1
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23e326ed7e3e400aad5aaa4955fa6bad
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 23e326ed7e3e400aad5aaa4955fa6bad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:43.835 [ 1]:0x2
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:43.835 08:56:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:12:44.094 [ 0]:0x2
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nguid=0a0432ff6869482886817a1f25970f33 00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:44.094 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:44.094 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.352 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:44.352 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:44.352 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I a5163c36-6d5e-4a67-b516-dcc5f454d51a -a 10.0.0.2 -s 4420 -i 4 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:12:44.610 08:57:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 
-- # (( i++ <= 15 )) 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:46.508 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:46.766 [ 0]:0x1 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=23e326ed7e3e400aad5aaa4955fa6bad 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking 
-- target/ns_masking.sh@45 -- # [[ 23e326ed7e3e400aad5aaa4955fa6bad != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:46.766 [ 1]:0x2 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:46.766 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.025 08:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.025 [ 0]:0x2 
00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:47.025 08:57:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:47.283 [2024-11-20 08:57:03.170600] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:47.283 request: 00:12:47.283 { 00:12:47.283 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:47.283 "nsid": 2, 00:12:47.283 "host": "nqn.2016-06.io.spdk:host1", 00:12:47.283 "method": "nvmf_ns_remove_host", 00:12:47.283 "req_id": 1 00:12:47.283 } 00:12:47.283 Got JSON-RPC error response 00:12:47.283 response: 00:12:47.283 { 00:12:47.283 "code": -32602, 00:12:47.283 "message": "Invalid parameters" 00:12:47.283 } 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:47.283 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:47.284 [ 0]:0x2 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:47.284 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=0a0432ff6869482886817a1f25970f33 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 0a0432ff6869482886817a1f25970f33 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:47.543 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2290672 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2290672 /var/tmp/host.sock 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 2290672 ']' 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:47.543 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:47.543 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:47.543 [2024-11-20 08:57:03.547372] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:12:47.543 [2024-11-20 08:57:03.547419] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2290672 ] 00:12:47.802 [2024-11-20 08:57:03.622823] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:47.802 [2024-11-20 08:57:03.664302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:48.059 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:48.059 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:12:48.059 08:57:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:48.059 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:48.317 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 09cee773-5996-4d36-b143-38a5d4268224 00:12:48.317 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:48.317 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09CEE77359964D36B14338A5D4268224 -i 00:12:48.574 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 1560d556-f009-4097-8b7f-55dd410b8336 00:12:48.574 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:48.575 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 1560D556F00940978B7F55DD410B8336 -i 00:12:48.832 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:49.091 08:57:04 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:49.091 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:49.091 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:49.348 nvme0n1 00:12:49.348 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:49.348 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:49.606 nvme1n2 00:12:49.606 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:49.606 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:49.606 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:49.606 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:49.606 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:49.864 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:49.864 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:49.864 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:49.864 08:57:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:50.122 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 09cee773-5996-4d36-b143-38a5d4268224 == \0\9\c\e\e\7\7\3\-\5\9\9\6\-\4\d\3\6\-\b\1\4\3\-\3\8\a\5\d\4\2\6\8\2\2\4 ]] 00:12:50.122 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:50.122 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:50.122 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:50.381 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 1560d556-f009-4097-8b7f-55dd410b8336 == \1\5\6\0\d\5\5\6\-\f\0\0\9\-\4\0\9\7\-\8\b\7\f\-\5\5\d\d\4\1\0\b\8\3\3\6 ]] 00:12:50.381 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid 09cee773-5996-4d36-b143-38a5d4268224 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09CEE77359964D36B14338A5D4268224 00:12:50.641 08:57:06 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09CEE77359964D36B14338A5D4268224 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:50.641 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g 09CEE77359964D36B14338A5D4268224 00:12:50.900 [2024-11-20 08:57:06.808712] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:12:50.900 [2024-11-20 08:57:06.808746] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:12:50.900 [2024-11-20 08:57:06.808755] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:12:50.900 request: 00:12:50.900 { 00:12:50.900 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:50.900 "namespace": { 00:12:50.900 "bdev_name": "invalid", 00:12:50.900 "nsid": 1, 00:12:50.900 "nguid": "09CEE77359964D36B14338A5D4268224", 00:12:50.900 "no_auto_visible": false 00:12:50.900 }, 00:12:50.900 "method": "nvmf_subsystem_add_ns", 00:12:50.900 "req_id": 1 00:12:50.900 } 00:12:50.900 Got JSON-RPC error response 00:12:50.900 response: 00:12:50.900 { 00:12:50.900 "code": -32602, 00:12:50.900 "message": "Invalid parameters" 00:12:50.900 } 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid 09cee773-5996-4d36-b143-38a5d4268224 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:12:50.900 08:57:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 09CEE77359964D36B14338A5D4268224 -i 00:12:51.158 08:57:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:12:53.062 08:57:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:12:53.062 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:12:53.062 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 2290672 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2290672 ']' 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2290672 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2290672 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2290672' 00:12:53.322 killing process with pid 2290672 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2290672 00:12:53.322 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2290672 00:12:53.585 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:53.845 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:53.845 rmmod nvme_tcp 00:12:53.845 rmmod nvme_fabrics 00:12:53.846 rmmod nvme_keyring 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 2288735 ']' 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 2288735 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 2288735 ']' 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 2288735 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 
00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:53.846 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2288735 00:12:54.105 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:54.105 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:54.105 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2288735' 00:12:54.105 killing process with pid 2288735 00:12:54.105 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 2288735 00:12:54.105 08:57:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 2288735 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@264 -- # local dev 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:54.105 08:57:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # return 0 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:56.742 08:57:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@284 -- # iptr 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-save 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-restore 00:12:56.742 00:12:56.742 real 0m25.969s 00:12:56.742 user 0m30.812s 00:12:56.742 sys 0m7.251s 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:56.742 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 ************************************ 00:12:56.743 END TEST nvmf_ns_masking 00:12:56.743 ************************************ 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.743 ************************************ 00:12:56.743 START TEST nvmf_nvme_cli 00:12:56.743 ************************************ 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:56.743 * Looking for test storage... 00:12:56.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@344 -- # case "$op" in 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.743 --rc genhtml_branch_coverage=1 00:12:56.743 --rc genhtml_function_coverage=1 00:12:56.743 --rc genhtml_legend=1 00:12:56.743 --rc geninfo_all_blocks=1 00:12:56.743 --rc geninfo_unexecuted_blocks=1 00:12:56.743 00:12:56.743 ' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.743 --rc genhtml_branch_coverage=1 00:12:56.743 --rc genhtml_function_coverage=1 00:12:56.743 --rc genhtml_legend=1 00:12:56.743 --rc geninfo_all_blocks=1 00:12:56.743 --rc geninfo_unexecuted_blocks=1 00:12:56.743 00:12:56.743 ' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.743 --rc genhtml_branch_coverage=1 00:12:56.743 --rc genhtml_function_coverage=1 00:12:56.743 --rc genhtml_legend=1 00:12:56.743 --rc geninfo_all_blocks=1 00:12:56.743 --rc geninfo_unexecuted_blocks=1 00:12:56.743 00:12:56.743 ' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:56.743 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:56.743 --rc genhtml_branch_coverage=1 00:12:56.743 --rc genhtml_function_coverage=1 00:12:56.743 --rc genhtml_legend=1 00:12:56.743 --rc geninfo_all_blocks=1 00:12:56.743 --rc geninfo_unexecuted_blocks=1 00:12:56.743 00:12:56.743 ' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:56.743 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:56.744 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:12:56.744 08:57:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:13:03.317 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:03.318 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:03.318 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:03.318 Found net devices under 0000:86:00.0: cvl_0_0 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:03.318 Found net devices under 0000:86:00.1: cvl_0_1 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:03.318 08:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # create_target_ns 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:03.318 08:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # 
add_to_ns cvl_0_1 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:03.318 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:03.319 10.0.0.1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:03.319 
08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:03.319 10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:03.319 08:57:18 
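The `set_ip` steps in the trace derive the dotted-quad addresses from 32-bit pool values (167772161 is 0x0a000001) via setup.sh's `val_to_ip` helper before assigning them with `ip addr add`. The sketch below is a standalone reconstruction of that conversion, not the exact setup.sh source; only the printed results are taken from the trace.

```shell
#!/usr/bin/env bash
# Reconstruction of the val_to_ip step seen in the trace: split a 32-bit
# integer into four octets and print it as a dotted-quad IPv4 address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator, cvl_0_0)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target, cvl_0_1)
```

The pool increments by two per interface pair, which is why the loop guard in the trace checks `(_dev + no) * 2 <= 255` against the last octet.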
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:03.319 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:03.319 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:13:03.319 00:13:03.319 --- 10.0.0.1 ping statistics --- 00:13:03.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.319 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:03.319 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:03.319 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.172 ms 00:13:03.319 00:13:03.319 --- 10.0.0.2 ping statistics --- 00:13:03.319 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:03.319 rtt min/avg/max/mdev = 0.172/0.172/0.172/0.000 ms 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:03.319 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:03.320 08:57:18 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 
00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=2295806 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 2295806 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 2295806 ']' 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:03.320 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.320 [2024-11-20 08:57:18.649346] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:13:03.320 [2024-11-20 08:57:18.649388] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:03.320 [2024-11-20 08:57:18.728792] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:03.320 [2024-11-20 08:57:18.772317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:03.320 [2024-11-20 08:57:18.772357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:03.320 [2024-11-20 08:57:18.772364] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:03.320 [2024-11-20 08:57:18.772369] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:03.320 [2024-11-20 08:57:18.772375] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:03.320 [2024-11-20 08:57:18.773976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:03.320 [2024-11-20 08:57:18.774042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:03.320 [2024-11-20 08:57:18.774155] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:03.320 [2024-11-20 08:57:18.774156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.320 [2024-11-20 08:57:18.915630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.320 Malloc0 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:03.320 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 Malloc1 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:18 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 [2024-11-20 08:57:19.014609] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:13:03.321 00:13:03.321 Discovery Log Number of Records 2, Generation counter 2 00:13:03.321 =====Discovery Log Entry 0====== 00:13:03.321 trtype: tcp 00:13:03.321 adrfam: ipv4 00:13:03.321 subtype: current discovery subsystem 00:13:03.321 treq: not required 00:13:03.321 portid: 0 00:13:03.321 trsvcid: 4420 
00:13:03.321 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:13:03.321 traddr: 10.0.0.2 00:13:03.321 eflags: explicit discovery connections, duplicate discovery information 00:13:03.321 sectype: none 00:13:03.321 =====Discovery Log Entry 1====== 00:13:03.321 trtype: tcp 00:13:03.321 adrfam: ipv4 00:13:03.321 subtype: nvme subsystem 00:13:03.321 treq: not required 00:13:03.321 portid: 0 00:13:03.321 trsvcid: 4420 00:13:03.321 subnqn: nqn.2016-06.io.spdk:cnode1 00:13:03.321 traddr: 10.0.0.2 00:13:03.321 eflags: none 00:13:03.321 sectype: none 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:13:03.321 08:57:19 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:04.696 08:57:20 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:13:04.696 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:13:04.696 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:13:04.696 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:13:04.696 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:13:04.696 08:57:20 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:13:06.595 
08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:13:06.595 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:13:06.596 /dev/nvme0n2 ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.596 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:06.596 rmmod nvme_tcp 00:13:06.596 rmmod nvme_fabrics 00:13:06.596 rmmod nvme_keyring 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 2295806 ']' 
00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 2295806 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 2295806 ']' 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 2295806 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:06.596 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2295806 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2295806' 00:13:06.855 killing process with pid 2295806 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 2295806 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 2295806 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@264 -- # local dev 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:06.855 08:57:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # return 0 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 
00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@284 -- # iptr 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-save 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-restore 00:13:09.392 00:13:09.392 real 0m12.673s 00:13:09.392 user 0m18.024s 00:13:09.392 sys 0m5.236s 00:13:09.392 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:09.393 ************************************ 00:13:09.393 END TEST nvmf_nvme_cli 00:13:09.393 ************************************ 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:13:09.393 08:57:24 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:09.393 ************************************ 00:13:09.393 START TEST nvmf_vfio_user 00:13:09.393 ************************************ 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:13:09.393 * Looking for test storage... 00:13:09.393 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:13:09.393 08:57:25 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:09.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.393 --rc genhtml_branch_coverage=1 00:13:09.393 --rc genhtml_function_coverage=1 00:13:09.393 --rc genhtml_legend=1 00:13:09.393 --rc geninfo_all_blocks=1 00:13:09.393 --rc geninfo_unexecuted_blocks=1 00:13:09.393 00:13:09.393 ' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:09.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.393 --rc genhtml_branch_coverage=1 00:13:09.393 --rc genhtml_function_coverage=1 00:13:09.393 --rc genhtml_legend=1 00:13:09.393 --rc geninfo_all_blocks=1 00:13:09.393 --rc geninfo_unexecuted_blocks=1 00:13:09.393 00:13:09.393 ' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:09.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.393 --rc genhtml_branch_coverage=1 00:13:09.393 --rc genhtml_function_coverage=1 00:13:09.393 --rc genhtml_legend=1 00:13:09.393 --rc geninfo_all_blocks=1 00:13:09.393 --rc geninfo_unexecuted_blocks=1 00:13:09.393 00:13:09.393 ' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:09.393 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:09.393 --rc genhtml_branch_coverage=1 00:13:09.393 --rc genhtml_function_coverage=1 00:13:09.393 --rc genhtml_legend=1 00:13:09.393 --rc geninfo_all_blocks=1 00:13:09.393 --rc 
geninfo_unexecuted_blocks=1 00:13:09.393 00:13:09.393 ' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:09.393 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:09.394 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2297091 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2297091' 00:13:09.394 Process pid: 2297091 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2297091 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2297091 ']' 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:09.394 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:09.394 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:09.394 [2024-11-20 08:57:25.281879] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:13:09.394 [2024-11-20 08:57:25.281927] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:09.394 [2024-11-20 08:57:25.356631] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:09.394 [2024-11-20 08:57:25.397516] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:09.394 [2024-11-20 08:57:25.397554] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:09.394 [2024-11-20 08:57:25.397562] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:09.394 [2024-11-20 08:57:25.397568] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:09.394 [2024-11-20 08:57:25.397574] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:09.394 [2024-11-20 08:57:25.399059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:09.394 [2024-11-20 08:57:25.399165] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:09.394 [2024-11-20 08:57:25.399250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:09.394 [2024-11-20 08:57:25.399251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:09.651 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:09.651 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:09.651 08:57:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:10.582 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:13:10.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:10.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:10.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:10.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:13:10.840 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:11.097 Malloc1 00:13:11.097 08:57:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:11.355 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:11.355 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:13:11.612 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:11.612 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:11.612 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:11.869 Malloc2 00:13:11.869 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:12.126 08:57:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:12.126 08:57:28 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:13:12.384 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:12.384 [2024-11-20 08:57:28.377901] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:13:12.384 [2024-11-20 08:57:28.377932] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2297581 ] 00:13:12.384 [2024-11-20 08:57:28.416860] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:13:12.384 [2024-11-20 08:57:28.421171] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.384 [2024-11-20 08:57:28.421191] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f4ebaaa0000 00:13:12.384 [2024-11-20 08:57:28.422182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.384 [2024-11-20 08:57:28.423170] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.424173] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.425182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.426184] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.427182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.428192] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:12.643 
[2024-11-20 08:57:28.429193] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:12.643 [2024-11-20 08:57:28.430209] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:12.643 [2024-11-20 08:57:28.430218] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f4ebaa95000 00:13:12.643 [2024-11-20 08:57:28.431159] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.643 [2024-11-20 08:57:28.444781] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:13:12.643 [2024-11-20 08:57:28.444810] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:13:12.643 [2024-11-20 08:57:28.447309] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:12.643 [2024-11-20 08:57:28.447344] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:12.643 [2024-11-20 08:57:28.447408] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:13:12.643 [2024-11-20 08:57:28.447421] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:13:12.643 [2024-11-20 08:57:28.447427] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:13:12.643 [2024-11-20 08:57:28.448308] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:13:12.643 [2024-11-20 08:57:28.448316] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:13:12.643 [2024-11-20 08:57:28.448322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:13:12.643 [2024-11-20 08:57:28.449312] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:13:12.643 [2024-11-20 08:57:28.449323] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:13:12.643 [2024-11-20 08:57:28.449330] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.450315] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:13:12.643 [2024-11-20 08:57:28.450322] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.451323] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:13:12.643 [2024-11-20 08:57:28.451330] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:12.643 [2024-11-20 08:57:28.451335] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.451341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.451448] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:13:12.643 [2024-11-20 08:57:28.451452] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.451457] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:13:12.643 [2024-11-20 08:57:28.452332] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:13:12.643 [2024-11-20 08:57:28.453333] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:13:12.643 [2024-11-20 08:57:28.454342] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:12.643 [2024-11-20 08:57:28.455340] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:12.643 [2024-11-20 08:57:28.455419] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:12.643 [2024-11-20 08:57:28.456353] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:13:12.643 [2024-11-20 08:57:28.456360] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:12.643 [2024-11-20 08:57:28.456365] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:12.643 [2024-11-20 08:57:28.456381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:13:12.643 [2024-11-20 08:57:28.456392] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:12.643 [2024-11-20 08:57:28.456407] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.643 [2024-11-20 08:57:28.456411] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.643 [2024-11-20 08:57:28.456415] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.643 [2024-11-20 08:57:28.456427] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456484] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:13:12.644 [2024-11-20 08:57:28.456488] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:13:12.644 [2024-11-20 08:57:28.456492] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:13:12.644 [2024-11-20 08:57:28.456496] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:12.644 [2024-11-20 08:57:28.456502] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:13:12.644 [2024-11-20 08:57:28.456507] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:13:12.644 [2024-11-20 08:57:28.456511] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456518] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456528] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456552] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.644 [2024-11-20 08:57:28.456560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.644 [2024-11-20 08:57:28.456568] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.644 [2024-11-20 08:57:28.456575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:12.644 [2024-11-20 08:57:28.456579] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456585] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456611] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:13:12.644 [2024-11-20 08:57:28.456616] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456622] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456627] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456635] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456695] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456702] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:12.644 
[2024-11-20 08:57:28.456709] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:12.644 [2024-11-20 08:57:28.456713] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:12.644 [2024-11-20 08:57:28.456716] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.644 [2024-11-20 08:57:28.456722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456743] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:13:12.644 [2024-11-20 08:57:28.456750] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456756] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456763] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.644 [2024-11-20 08:57:28.456766] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.644 [2024-11-20 08:57:28.456770] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.644 [2024-11-20 08:57:28.456775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456805] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456812] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456818] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:12.644 [2024-11-20 08:57:28.456822] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.644 [2024-11-20 08:57:28.456825] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.644 [2024-11-20 08:57:28.456831] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456849] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456855] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456862] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456868] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456873] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456882] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:12.644 [2024-11-20 08:57:28.456886] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:13:12.644 [2024-11-20 08:57:28.456890] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:13:12.644 [2024-11-20 08:57:28.456907] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456928] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456952] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 
08:57:28.456973] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.456985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:12.644 [2024-11-20 08:57:28.456997] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:12.644 [2024-11-20 08:57:28.457001] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:12.644 [2024-11-20 08:57:28.457004] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:12.644 [2024-11-20 08:57:28.457007] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:12.644 [2024-11-20 08:57:28.457010] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:12.644 [2024-11-20 08:57:28.457016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:12.644 [2024-11-20 08:57:28.457022] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:12.644 [2024-11-20 08:57:28.457026] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:12.644 [2024-11-20 08:57:28.457029] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.644 [2024-11-20 08:57:28.457035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.457041] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:12.644 [2024-11-20 08:57:28.457045] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:12.644 [2024-11-20 08:57:28.457048] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.644 [2024-11-20 08:57:28.457053] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:12.644 [2024-11-20 08:57:28.457061] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:12.644 [2024-11-20 08:57:28.457065] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:12.644 [2024-11-20 08:57:28.457068] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:12.645 [2024-11-20 08:57:28.457074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:12.645 [2024-11-20 08:57:28.457080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:12.645 [2024-11-20 08:57:28.457091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:12.645 [2024-11-20 08:57:28.457101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:12.645 [2024-11-20 08:57:28.457107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:12.645 ===================================================== 00:13:12.645 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:12.645 ===================================================== 00:13:12.645 Controller Capabilities/Features 00:13:12.645 
================================ 00:13:12.645 Vendor ID: 4e58 00:13:12.645 Subsystem Vendor ID: 4e58 00:13:12.645 Serial Number: SPDK1 00:13:12.645 Model Number: SPDK bdev Controller 00:13:12.645 Firmware Version: 25.01 00:13:12.645 Recommended Arb Burst: 6 00:13:12.645 IEEE OUI Identifier: 8d 6b 50 00:13:12.645 Multi-path I/O 00:13:12.645 May have multiple subsystem ports: Yes 00:13:12.645 May have multiple controllers: Yes 00:13:12.645 Associated with SR-IOV VF: No 00:13:12.645 Max Data Transfer Size: 131072 00:13:12.645 Max Number of Namespaces: 32 00:13:12.645 Max Number of I/O Queues: 127 00:13:12.645 NVMe Specification Version (VS): 1.3 00:13:12.645 NVMe Specification Version (Identify): 1.3 00:13:12.645 Maximum Queue Entries: 256 00:13:12.645 Contiguous Queues Required: Yes 00:13:12.645 Arbitration Mechanisms Supported 00:13:12.645 Weighted Round Robin: Not Supported 00:13:12.645 Vendor Specific: Not Supported 00:13:12.645 Reset Timeout: 15000 ms 00:13:12.645 Doorbell Stride: 4 bytes 00:13:12.645 NVM Subsystem Reset: Not Supported 00:13:12.645 Command Sets Supported 00:13:12.645 NVM Command Set: Supported 00:13:12.645 Boot Partition: Not Supported 00:13:12.645 Memory Page Size Minimum: 4096 bytes 00:13:12.645 Memory Page Size Maximum: 4096 bytes 00:13:12.645 Persistent Memory Region: Not Supported 00:13:12.645 Optional Asynchronous Events Supported 00:13:12.645 Namespace Attribute Notices: Supported 00:13:12.645 Firmware Activation Notices: Not Supported 00:13:12.645 ANA Change Notices: Not Supported 00:13:12.645 PLE Aggregate Log Change Notices: Not Supported 00:13:12.645 LBA Status Info Alert Notices: Not Supported 00:13:12.645 EGE Aggregate Log Change Notices: Not Supported 00:13:12.645 Normal NVM Subsystem Shutdown event: Not Supported 00:13:12.645 Zone Descriptor Change Notices: Not Supported 00:13:12.645 Discovery Log Change Notices: Not Supported 00:13:12.645 Controller Attributes 00:13:12.645 128-bit Host Identifier: Supported 00:13:12.645 
Non-Operational Permissive Mode: Not Supported 00:13:12.645 NVM Sets: Not Supported 00:13:12.645 Read Recovery Levels: Not Supported 00:13:12.645 Endurance Groups: Not Supported 00:13:12.645 Predictable Latency Mode: Not Supported 00:13:12.645 Traffic Based Keep ALive: Not Supported 00:13:12.645 Namespace Granularity: Not Supported 00:13:12.645 SQ Associations: Not Supported 00:13:12.645 UUID List: Not Supported 00:13:12.645 Multi-Domain Subsystem: Not Supported 00:13:12.645 Fixed Capacity Management: Not Supported 00:13:12.645 Variable Capacity Management: Not Supported 00:13:12.645 Delete Endurance Group: Not Supported 00:13:12.645 Delete NVM Set: Not Supported 00:13:12.645 Extended LBA Formats Supported: Not Supported 00:13:12.645 Flexible Data Placement Supported: Not Supported 00:13:12.645 00:13:12.645 Controller Memory Buffer Support 00:13:12.645 ================================ 00:13:12.645 Supported: No 00:13:12.645 00:13:12.645 Persistent Memory Region Support 00:13:12.645 ================================ 00:13:12.645 Supported: No 00:13:12.645 00:13:12.645 Admin Command Set Attributes 00:13:12.645 ============================ 00:13:12.645 Security Send/Receive: Not Supported 00:13:12.645 Format NVM: Not Supported 00:13:12.645 Firmware Activate/Download: Not Supported 00:13:12.645 Namespace Management: Not Supported 00:13:12.645 Device Self-Test: Not Supported 00:13:12.645 Directives: Not Supported 00:13:12.645 NVMe-MI: Not Supported 00:13:12.645 Virtualization Management: Not Supported 00:13:12.645 Doorbell Buffer Config: Not Supported 00:13:12.645 Get LBA Status Capability: Not Supported 00:13:12.645 Command & Feature Lockdown Capability: Not Supported 00:13:12.645 Abort Command Limit: 4 00:13:12.645 Async Event Request Limit: 4 00:13:12.645 Number of Firmware Slots: N/A 00:13:12.645 Firmware Slot 1 Read-Only: N/A 00:13:12.645 Firmware Activation Without Reset: N/A 00:13:12.645 Multiple Update Detection Support: N/A 00:13:12.645 Firmware Update 
Granularity: No Information Provided 00:13:12.645 Per-Namespace SMART Log: No 00:13:12.645 Asymmetric Namespace Access Log Page: Not Supported 00:13:12.645 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:13:12.645 Command Effects Log Page: Supported 00:13:12.645 Get Log Page Extended Data: Supported 00:13:12.645 Telemetry Log Pages: Not Supported 00:13:12.645 Persistent Event Log Pages: Not Supported 00:13:12.645 Supported Log Pages Log Page: May Support 00:13:12.645 Commands Supported & Effects Log Page: Not Supported 00:13:12.645 Feature Identifiers & Effects Log Page:May Support 00:13:12.645 NVMe-MI Commands & Effects Log Page: May Support 00:13:12.645 Data Area 4 for Telemetry Log: Not Supported 00:13:12.645 Error Log Page Entries Supported: 128 00:13:12.645 Keep Alive: Supported 00:13:12.645 Keep Alive Granularity: 10000 ms 00:13:12.645 00:13:12.645 NVM Command Set Attributes 00:13:12.645 ========================== 00:13:12.645 Submission Queue Entry Size 00:13:12.645 Max: 64 00:13:12.645 Min: 64 00:13:12.645 Completion Queue Entry Size 00:13:12.645 Max: 16 00:13:12.645 Min: 16 00:13:12.645 Number of Namespaces: 32 00:13:12.645 Compare Command: Supported 00:13:12.645 Write Uncorrectable Command: Not Supported 00:13:12.645 Dataset Management Command: Supported 00:13:12.645 Write Zeroes Command: Supported 00:13:12.645 Set Features Save Field: Not Supported 00:13:12.645 Reservations: Not Supported 00:13:12.645 Timestamp: Not Supported 00:13:12.645 Copy: Supported 00:13:12.645 Volatile Write Cache: Present 00:13:12.645 Atomic Write Unit (Normal): 1 00:13:12.645 Atomic Write Unit (PFail): 1 00:13:12.645 Atomic Compare & Write Unit: 1 00:13:12.645 Fused Compare & Write: Supported 00:13:12.645 Scatter-Gather List 00:13:12.645 SGL Command Set: Supported (Dword aligned) 00:13:12.645 SGL Keyed: Not Supported 00:13:12.645 SGL Bit Bucket Descriptor: Not Supported 00:13:12.645 SGL Metadata Pointer: Not Supported 00:13:12.645 Oversized SGL: Not Supported 00:13:12.645 SGL 
Metadata Address: Not Supported 00:13:12.645 SGL Offset: Not Supported 00:13:12.645 Transport SGL Data Block: Not Supported 00:13:12.645 Replay Protected Memory Block: Not Supported 00:13:12.645 00:13:12.645 Firmware Slot Information 00:13:12.645 ========================= 00:13:12.645 Active slot: 1 00:13:12.645 Slot 1 Firmware Revision: 25.01 00:13:12.645 00:13:12.645 00:13:12.645 Commands Supported and Effects 00:13:12.645 ============================== 00:13:12.645 Admin Commands 00:13:12.645 -------------- 00:13:12.645 Get Log Page (02h): Supported 00:13:12.645 Identify (06h): Supported 00:13:12.645 Abort (08h): Supported 00:13:12.645 Set Features (09h): Supported 00:13:12.645 Get Features (0Ah): Supported 00:13:12.645 Asynchronous Event Request (0Ch): Supported 00:13:12.645 Keep Alive (18h): Supported 00:13:12.645 I/O Commands 00:13:12.645 ------------ 00:13:12.645 Flush (00h): Supported LBA-Change 00:13:12.645 Write (01h): Supported LBA-Change 00:13:12.645 Read (02h): Supported 00:13:12.645 Compare (05h): Supported 00:13:12.645 Write Zeroes (08h): Supported LBA-Change 00:13:12.645 Dataset Management (09h): Supported LBA-Change 00:13:12.645 Copy (19h): Supported LBA-Change 00:13:12.645 00:13:12.645 Error Log 00:13:12.645 ========= 00:13:12.645 00:13:12.645 Arbitration 00:13:12.645 =========== 00:13:12.645 Arbitration Burst: 1 00:13:12.645 00:13:12.645 Power Management 00:13:12.645 ================ 00:13:12.645 Number of Power States: 1 00:13:12.645 Current Power State: Power State #0 00:13:12.645 Power State #0: 00:13:12.645 Max Power: 0.00 W 00:13:12.645 Non-Operational State: Operational 00:13:12.645 Entry Latency: Not Reported 00:13:12.645 Exit Latency: Not Reported 00:13:12.645 Relative Read Throughput: 0 00:13:12.645 Relative Read Latency: 0 00:13:12.645 Relative Write Throughput: 0 00:13:12.645 Relative Write Latency: 0 00:13:12.645 Idle Power: Not Reported 00:13:12.645 Active Power: Not Reported 00:13:12.645 Non-Operational Permissive Mode: Not 
Supported 00:13:12.645 00:13:12.646 Health Information 00:13:12.646 ================== 00:13:12.646 Critical Warnings: 00:13:12.646 Available Spare Space: OK 00:13:12.646 Temperature: OK 00:13:12.646 Device Reliability: OK 00:13:12.646 Read Only: No 00:13:12.646 Volatile Memory Backup: OK 00:13:12.646 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:12.646 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:12.646 Available Spare: 0% 00:13:12.646 Available Spare Threshold: 0% 00:13:12.646 [2024-11-20 08:57:28.457197] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:12.646 [2024-11-20 08:57:28.457205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:12.646 [2024-11-20 08:57:28.457229] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 00:13:12.646 [2024-11-20 08:57:28.457237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.646 [2024-11-20 08:57:28.457243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.646 [2024-11-20 08:57:28.457248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.646 [2024-11-20 08:57:28.457254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:12.646 [2024-11-20 08:57:28.459954] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:13:12.646 [2024-11-20 08:57:28.459965] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:13:12.646 
[2024-11-20 08:57:28.460377] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:12.646 [2024-11-20 08:57:28.460426] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 00:13:12.646 [2024-11-20 08:57:28.460433] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 00:13:12.646 [2024-11-20 08:57:28.461378] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:13:12.646 [2024-11-20 08:57:28.461389] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 00:13:12.646 [2024-11-20 08:57:28.461435] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:13:12.646 [2024-11-20 08:57:28.463413] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:12.646 Life Percentage Used: 0% 00:13:12.646 Data Units Read: 0 00:13:12.646 Data Units Written: 0 00:13:12.646 Host Read Commands: 0 00:13:12.646 Host Write Commands: 0 00:13:12.646 Controller Busy Time: 0 minutes 00:13:12.646 Power Cycles: 0 00:13:12.646 Power On Hours: 0 hours 00:13:12.646 Unsafe Shutdowns: 0 00:13:12.646 Unrecoverable Media Errors: 0 00:13:12.646 Lifetime Error Log Entries: 0 00:13:12.646 Warning Temperature Time: 0 minutes 00:13:12.646 Critical Temperature Time: 0 minutes 00:13:12.646 00:13:12.646 Number of Queues 00:13:12.646 ================ 00:13:12.646 Number of I/O Submission Queues: 127 00:13:12.646 Number of I/O Completion Queues: 127 00:13:12.646 00:13:12.646 Active Namespaces 00:13:12.646 ================= 00:13:12.646 Namespace ID:1 00:13:12.646 Error Recovery Timeout: Unlimited 
00:13:12.646 Command Set Identifier: NVM (00h) 00:13:12.646 Deallocate: Supported 00:13:12.646 Deallocated/Unwritten Error: Not Supported 00:13:12.646 Deallocated Read Value: Unknown 00:13:12.646 Deallocate in Write Zeroes: Not Supported 00:13:12.646 Deallocated Guard Field: 0xFFFF 00:13:12.646 Flush: Supported 00:13:12.646 Reservation: Supported 00:13:12.646 Namespace Sharing Capabilities: Multiple Controllers 00:13:12.646 Size (in LBAs): 131072 (0GiB) 00:13:12.646 Capacity (in LBAs): 131072 (0GiB) 00:13:12.646 Utilization (in LBAs): 131072 (0GiB) 00:13:12.646 NGUID: 2A2F796BCCFD4A968ADAEA6B1F3D2F51 00:13:12.646 UUID: 2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51 00:13:12.646 Thin Provisioning: Not Supported 00:13:12.646 Per-NS Atomic Units: Yes 00:13:12.646 Atomic Boundary Size (Normal): 0 00:13:12.646 Atomic Boundary Size (PFail): 0 00:13:12.646 Atomic Boundary Offset: 0 00:13:12.646 Maximum Single Source Range Length: 65535 00:13:12.646 Maximum Copy Length: 65535 00:13:12.646 Maximum Source Range Count: 1 00:13:12.646 NGUID/EUI64 Never Reused: No 00:13:12.646 Namespace Write Protected: No 00:13:12.646 Number of LBA Formats: 1 00:13:12.646 Current LBA Format: LBA Format #00 00:13:12.646 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:12.646 00:13:12.646 08:57:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:12.903 [2024-11-20 08:57:28.689756] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:18.318 Initializing NVMe Controllers 00:13:18.318 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:18.318 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:13:18.318 Initialization complete. Launching workers. 00:13:18.318 ======================================================== 00:13:18.318 Latency(us) 00:13:18.318 Device Information : IOPS MiB/s Average min max 00:13:18.318 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39954.20 156.07 3204.18 954.78 8593.39 00:13:18.318 ======================================================== 00:13:18.318 Total : 39954.20 156.07 3204.18 954.78 8593.39 00:13:18.318 00:13:18.318 [2024-11-20 08:57:33.711129] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:18.318 08:57:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:18.318 [2024-11-20 08:57:33.947235] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:23.572 Initializing NVMe Controllers 00:13:23.572 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:23.572 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:13:23.572 Initialization complete. Launching workers. 
00:13:23.572 ======================================================== 00:13:23.572 Latency(us) 00:13:23.572 Device Information : IOPS MiB/s Average min max 00:13:23.572 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16039.26 62.65 7979.73 6960.30 8982.39 00:13:23.572 ======================================================== 00:13:23.572 Total : 16039.26 62.65 7979.73 6960.30 8982.39 00:13:23.572 00:13:23.572 [2024-11-20 08:57:38.981971] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:23.572 08:57:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:23.572 [2024-11-20 08:57:39.186938] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:28.832 [2024-11-20 08:57:44.260227] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:28.832 Initializing NVMe Controllers 00:13:28.832 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.832 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:13:28.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:13:28.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:13:28.832 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:13:28.832 Initialization complete. Launching workers. 
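The MiB/s column in the two spdk_nvme_perf summaries above is just IOPS scaled by the 4 KiB I/O size passed via `-o 4096`. A quick awk sanity check (IOPS values copied from the read and write tables):

```shell
# MiB/s = IOPS * 4096 bytes / (1024*1024); values taken from the tables above
awk 'BEGIN {
    printf "read:  %.2f MiB/s\n", 39954.20 * 4096 / (1024 * 1024)
    printf "write: %.2f MiB/s\n", 16039.26 * 4096 / (1024 * 1024)
}'
```

Both results match the reported 156.07 and 62.65 MiB/s, confirming the column is derived rather than measured independently.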
00:13:28.832 Starting thread on core 2 00:13:28.832 Starting thread on core 3 00:13:28.832 Starting thread on core 1 00:13:28.832 08:57:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:13:28.832 [2024-11-20 08:57:44.565379] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.111 [2024-11-20 08:57:47.989145] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.111 Initializing NVMe Controllers 00:13:32.111 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.111 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.111 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:13:32.111 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:13:32.111 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:13:32.111 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:13:32.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:32.111 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:32.111 Initialization complete. Launching workers. 
00:13:32.111 Starting thread on core 1 with urgent priority queue 00:13:32.111 Starting thread on core 2 with urgent priority queue 00:13:32.111 Starting thread on core 3 with urgent priority queue 00:13:32.111 Starting thread on core 0 with urgent priority queue 00:13:32.111 SPDK bdev Controller (SPDK1 ) core 0: 916.67 IO/s 109.09 secs/100000 ios 00:13:32.111 SPDK bdev Controller (SPDK1 ) core 1: 1278.00 IO/s 78.25 secs/100000 ios 00:13:32.111 SPDK bdev Controller (SPDK1 ) core 2: 1199.67 IO/s 83.36 secs/100000 ios 00:13:32.111 SPDK bdev Controller (SPDK1 ) core 3: 962.00 IO/s 103.95 secs/100000 ios 00:13:32.111 ======================================================== 00:13:32.111 00:13:32.111 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:32.368 [2024-11-20 08:57:48.274978] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:32.368 Initializing NVMe Controllers 00:13:32.368 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.368 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:32.368 Namespace ID: 1 size: 0GB 00:13:32.368 Initialization complete. 00:13:32.368 INFO: using host memory buffer for IO 00:13:32.368 Hello world! 
00:13:32.368 [2024-11-20 08:57:48.313197] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:32.368 08:57:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:13:32.625 [2024-11-20 08:57:48.598339] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:33.996 Initializing NVMe Controllers 00:13:33.996 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.996 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:33.996 Initialization complete. Launching workers. 00:13:33.996 submit (in ns) avg, min, max = 8412.6, 3280.9, 4000680.0 00:13:33.996 complete (in ns) avg, min, max = 20079.8, 1776.5, 4073906.1 00:13:33.996 00:13:33.996 Submit histogram 00:13:33.996 ================ 00:13:33.996 Range in us Cumulative Count 00:13:33.996 3.270 - 3.283: 0.0061% ( 1) 00:13:33.996 3.283 - 3.297: 0.0799% ( 12) 00:13:33.996 3.297 - 3.311: 0.1845% ( 17) 00:13:33.996 3.311 - 3.325: 0.4120% ( 37) 00:13:33.996 3.325 - 3.339: 0.8179% ( 66) 00:13:33.996 3.339 - 3.353: 2.7549% ( 315) 00:13:33.996 3.353 - 3.367: 7.4038% ( 756) 00:13:33.996 3.367 - 3.381: 13.4301% ( 980) 00:13:33.996 3.381 - 3.395: 19.8500% ( 1044) 00:13:33.996 3.395 - 3.409: 25.9132% ( 986) 00:13:33.996 3.409 - 3.423: 32.0010% ( 990) 00:13:33.996 3.423 - 3.437: 37.4985% ( 894) 00:13:33.996 3.437 - 3.450: 43.1374% ( 917) 00:13:33.996 3.450 - 3.464: 47.7309% ( 747) 00:13:33.996 3.464 - 3.478: 51.7157% ( 648) 00:13:33.996 3.478 - 3.492: 56.5921% ( 793) 00:13:33.996 3.492 - 3.506: 63.9589% ( 1198) 00:13:33.996 3.506 - 3.520: 69.8131% ( 952) 00:13:33.996 3.520 - 3.534: 73.8409% ( 655) 00:13:33.996 3.534 - 3.548: 78.8033% ( 807) 00:13:33.996 3.548 - 3.562: 82.8557% ( 659) 
00:13:33.996 3.562 - 3.590: 86.6007% ( 609) 00:13:33.996 3.590 - 3.617: 87.7321% ( 184) 00:13:33.996 3.617 - 3.645: 88.4270% ( 113) 00:13:33.996 3.645 - 3.673: 89.7307% ( 212) 00:13:33.996 3.673 - 3.701: 91.5693% ( 299) 00:13:33.996 3.701 - 3.729: 93.3096% ( 283) 00:13:33.996 3.729 - 3.757: 94.9145% ( 261) 00:13:33.996 3.757 - 3.784: 96.5625% ( 268) 00:13:33.996 3.784 - 3.812: 97.8047% ( 202) 00:13:33.996 3.812 - 3.840: 98.5119% ( 115) 00:13:33.996 3.840 - 3.868: 99.0961% ( 95) 00:13:33.996 3.868 - 3.896: 99.3666% ( 44) 00:13:33.996 3.896 - 3.923: 99.4712% ( 17) 00:13:33.996 3.923 - 3.951: 99.5142% ( 7) 00:13:33.996 3.951 - 3.979: 99.5204% ( 1) 00:13:33.996 3.979 - 4.007: 99.5327% ( 2) 00:13:33.996 4.035 - 4.063: 99.5388% ( 1) 00:13:33.996 4.063 - 4.090: 99.5450% ( 1) 00:13:33.996 4.118 - 4.146: 99.5511% ( 1) 00:13:33.996 4.313 - 4.341: 99.5573% ( 1) 00:13:33.996 5.231 - 5.259: 99.5634% ( 1) 00:13:33.996 5.343 - 5.370: 99.5695% ( 1) 00:13:33.996 5.370 - 5.398: 99.5757% ( 1) 00:13:33.996 5.398 - 5.426: 99.6003% ( 4) 00:13:33.996 5.454 - 5.482: 99.6126% ( 2) 00:13:33.996 5.510 - 5.537: 99.6249% ( 2) 00:13:33.996 5.565 - 5.593: 99.6310% ( 1) 00:13:33.996 5.593 - 5.621: 99.6372% ( 1) 00:13:33.996 5.677 - 5.704: 99.6495% ( 2) 00:13:33.996 5.732 - 5.760: 99.6556% ( 1) 00:13:33.996 5.760 - 5.788: 99.6618% ( 1) 00:13:33.996 5.843 - 5.871: 99.6679% ( 1) 00:13:33.996 5.871 - 5.899: 99.6864% ( 3) 00:13:33.996 6.038 - 6.066: 99.6925% ( 1) 00:13:33.996 6.066 - 6.094: 99.7048% ( 2) 00:13:33.996 6.094 - 6.122: 99.7110% ( 1) 00:13:33.996 6.177 - 6.205: 99.7171% ( 1) 00:13:33.996 6.205 - 6.233: 99.7233% ( 1) 00:13:33.996 6.233 - 6.261: 99.7294% ( 1) 00:13:33.996 6.289 - 6.317: 99.7356% ( 1) 00:13:33.996 6.456 - 6.483: 99.7417% ( 1) 00:13:33.996 6.483 - 6.511: 99.7479% ( 1) 00:13:33.996 6.539 - 6.567: 99.7602% ( 2) 00:13:33.996 6.567 - 6.595: 99.7786% ( 3) 00:13:33.996 6.595 - 6.623: 99.7848% ( 1) 00:13:33.996 6.678 - 6.706: 99.7909% ( 1) 00:13:33.996 7.012 - 7.040: 99.7971% ( 1) 
00:13:33.996 7.123 - 7.179: 99.8155% ( 3) 00:13:33.996 7.402 - 7.457: 99.8217% ( 1) 00:13:33.996 7.513 - 7.569: 99.8278% ( 1) 00:13:33.996 7.847 - 7.903: 99.8340% ( 1) 00:13:33.996 8.237 - 8.292: 99.8401% ( 1) 00:13:33.996 8.459 - 8.515: 99.8524% ( 2) 00:13:33.996 8.682 - 8.737: 99.8586% ( 1) 00:13:33.996 8.793 - 8.849: 99.8647% ( 1) 00:13:33.996 9.405 - 9.461: 99.8709% ( 1) 00:13:33.996 9.739 - 9.795: 99.8770% ( 1) 00:13:33.996 3989.148 - 4017.642: 100.0000% ( 20) 00:13:33.996 00:13:33.996 [2024-11-20 08:57:49.619176] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:33.996 Complete histogram 00:13:33.996 ================== 00:13:33.996 Range in us Cumulative Count 00:13:33.996 1.774 - 1.781: 0.0123% ( 2) 00:13:33.996 1.809 - 1.823: 0.0922% ( 13) 00:13:33.996 1.823 - 1.837: 1.0577% ( 157) 00:13:33.996 1.837 - 1.850: 2.7241% ( 271) 00:13:33.996 1.850 - 1.864: 4.0770% ( 220) 00:13:33.996 1.864 - 1.878: 27.3337% ( 3782) 00:13:33.996 1.878 - 1.892: 78.5512% ( 8329) 00:13:33.996 1.892 - 1.906: 89.6200% ( 1800) 00:13:33.996 1.906 - 1.920: 94.0044% ( 713) 00:13:33.996 1.920 - 1.934: 94.9699% ( 157) 00:13:33.996 1.934 - 1.948: 96.0214% ( 171) 00:13:33.996 1.948 - 1.962: 98.1060% ( 339) 00:13:33.996 1.962 - 1.976: 99.0530% ( 154) 00:13:33.996 1.976 - 1.990: 99.2252% ( 28) 00:13:33.996 1.990 - 2.003: 99.2805% ( 9) 00:13:33.996 2.003 - 2.017: 99.2867% ( 1) 00:13:33.996 2.017 - 2.031: 99.2990% ( 2) 00:13:33.996 2.031 - 2.045: 99.3174% ( 3) 00:13:33.996 2.045 - 2.059: 99.3236% ( 1) 00:13:33.996 2.059 - 2.073: 99.3297% ( 1) 00:13:33.996 2.073 - 2.087: 99.3359% ( 1) 00:13:33.996 2.087 - 2.101: 99.3482% ( 2) 00:13:33.996 2.101 - 2.115: 99.3543% ( 1) 00:13:33.996 2.115 - 2.129: 99.3605% ( 1) 00:13:33.996 2.129 - 2.143: 99.3666% ( 1) 00:13:33.996 2.157 - 2.170: 99.3728% ( 1) 00:13:33.996 2.226 - 2.240: 99.3789% ( 1) 00:13:33.996 2.268 - 2.282: 99.3851% ( 1) 00:13:33.996 2.296 - 2.310: 99.3912% ( 1) 00:13:33.996 2.323 - 
2.337: 99.3974% ( 1) 00:13:33.996 2.351 - 2.365: 99.4035% ( 1) 00:13:33.996 3.757 - 3.784: 99.4097% ( 1) 00:13:33.996 3.896 - 3.923: 99.4158% ( 1) 00:13:33.996 3.923 - 3.951: 99.4220% ( 1) 00:13:33.996 3.979 - 4.007: 99.4281% ( 1) 00:13:33.996 4.007 - 4.035: 99.4343% ( 1) 00:13:33.996 4.090 - 4.118: 99.4404% ( 1) 00:13:33.996 4.118 - 4.146: 99.4466% ( 1) 00:13:33.996 4.202 - 4.230: 99.4527% ( 1) 00:13:33.996 4.313 - 4.341: 99.4589% ( 1) 00:13:33.996 4.369 - 4.397: 99.4650% ( 1) 00:13:33.996 4.424 - 4.452: 99.4712% ( 1) 00:13:33.997 4.508 - 4.536: 99.4773% ( 1) 00:13:33.997 4.675 - 4.703: 99.4835% ( 1) 00:13:33.997 4.786 - 4.814: 99.4896% ( 1) 00:13:33.997 5.259 - 5.287: 99.4958% ( 1) 00:13:33.997 5.398 - 5.426: 99.5019% ( 1) 00:13:33.997 5.621 - 5.649: 99.5081% ( 1) 00:13:33.997 5.843 - 5.871: 99.5142% ( 1) 00:13:33.997 6.038 - 6.066: 99.5204% ( 1) 00:13:33.997 6.066 - 6.094: 99.5265% ( 1) 00:13:33.997 6.539 - 6.567: 99.5327% ( 1) 00:13:33.997 6.790 - 6.817: 99.5388% ( 1) 00:13:33.997 144.250 - 145.141: 99.5450% ( 1) 00:13:33.997 3989.148 - 4017.642: 99.9939% ( 73) 00:13:33.997 4046.136 - 4074.630: 100.0000% ( 1) 00:13:33.997 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:33.997 [ 00:13:33.997 { 00:13:33.997 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:13:33.997 "subtype": "Discovery", 00:13:33.997 "listen_addresses": [], 00:13:33.997 "allow_any_host": true, 00:13:33.997 "hosts": [] 00:13:33.997 }, 00:13:33.997 { 00:13:33.997 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:33.997 "subtype": "NVMe", 00:13:33.997 "listen_addresses": [ 00:13:33.997 { 00:13:33.997 "trtype": "VFIOUSER", 00:13:33.997 "adrfam": "IPv4", 00:13:33.997 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:33.997 "trsvcid": "0" 00:13:33.997 } 00:13:33.997 ], 00:13:33.997 "allow_any_host": true, 00:13:33.997 "hosts": [], 00:13:33.997 "serial_number": "SPDK1", 00:13:33.997 "model_number": "SPDK bdev Controller", 00:13:33.997 "max_namespaces": 32, 00:13:33.997 "min_cntlid": 1, 00:13:33.997 "max_cntlid": 65519, 00:13:33.997 "namespaces": [ 00:13:33.997 { 00:13:33.997 "nsid": 1, 00:13:33.997 "bdev_name": "Malloc1", 00:13:33.997 "name": "Malloc1", 00:13:33.997 "nguid": "2A2F796BCCFD4A968ADAEA6B1F3D2F51", 00:13:33.997 "uuid": "2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51" 00:13:33.997 } 00:13:33.997 ] 00:13:33.997 }, 00:13:33.997 { 00:13:33.997 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:33.997 "subtype": "NVMe", 00:13:33.997 "listen_addresses": [ 00:13:33.997 { 00:13:33.997 "trtype": "VFIOUSER", 00:13:33.997 "adrfam": "IPv4", 00:13:33.997 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:33.997 "trsvcid": "0" 00:13:33.997 } 00:13:33.997 ], 00:13:33.997 "allow_any_host": true, 00:13:33.997 "hosts": [], 00:13:33.997 "serial_number": "SPDK2", 00:13:33.997 "model_number": "SPDK bdev Controller", 00:13:33.997 "max_namespaces": 32, 00:13:33.997 "min_cntlid": 1, 00:13:33.997 "max_cntlid": 65519, 00:13:33.997 "namespaces": [ 00:13:33.997 { 00:13:33.997 "nsid": 1, 00:13:33.997 "bdev_name": "Malloc2", 00:13:33.997 "name": "Malloc2", 00:13:33.997 "nguid": "701B5BBDAF6B4C88B2692C56818C5396", 00:13:33.997 "uuid": "701b5bbd-af6b-4c88-b269-2c56818c5396" 00:13:33.997 } 00:13:33.997 ] 00:13:33.997 } 00:13:33.997 ] 
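The nvmf_get_subsystems dump above carries one UUID per namespace. As a hypothetical offline helper (not part of the test scripts), the UUIDs can be pulled out of a timestamp-stripped copy of that JSON with plain grep and cut; the sample string below is a condensed stand-in for the real dump:

```shell
# Hypothetical helper: extract namespace UUIDs from a saved
# `rpc.py nvmf_get_subsystems` dump (log timestamp prefixes removed)
json='[{"nqn":"nqn.2019-07.io.spdk:cnode1","namespaces":[{"nsid":1,"uuid":"2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51"}]}]'
printf '%s\n' "$json" | grep -o '"uuid": *"[^"]*"' | cut -d'"' -f4
```

Prints `2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51`, the Malloc1 namespace UUID shown in the dump; a JSON-aware tool like jq would be more robust for nested output.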
00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2301069 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:33.997 08:57:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:13:33.997 [2024-11-20 08:57:50.030399] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:13:34.254 Malloc3 00:13:34.254 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:13:34.254 [2024-11-20 08:57:50.264201] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:13:34.254 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:34.511 Asynchronous Event Request test 00:13:34.511 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.511 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:13:34.511 Registering asynchronous event callbacks... 00:13:34.511 Starting namespace attribute notice tests for all controllers... 00:13:34.511 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:34.511 aer_cb - Changed Namespace 00:13:34.511 Cleaning up... 
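The traced `waitforfile /tmp/aer_touch_file` call above (from autotest_common.sh) blocks until the aer tool creates its touch file after observing the namespace-change event. A minimal stand-in that mimics that polling loop, using a demo path and a background `touch` in place of the real aer tool:

```shell
# Sketch of the harness's waitforfile pattern: poll for a file with bounded retries
waitforfile() {
    local file=$1 i=0
    while [ ! -e "$file" ] && [ $i -lt 50 ]; do
        sleep 0.1
        i=$((i + 1))
    done
    [ -e "$file" ]     # exit status reflects whether the file appeared
}

rm -f /tmp/aer_touch_demo
(sleep 0.3 && touch /tmp/aer_touch_demo) &   # stands in for `aer ... -t <file>`
waitforfile /tmp/aer_touch_demo && echo "touch file observed"
rm -f /tmp/aer_touch_demo
```

The real harness then removes the touch file (`rm -f /tmp/aer_touch_file`, visible in the trace) before adding Malloc3 to trigger the event.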
00:13:34.511 [ 00:13:34.511 { 00:13:34.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:34.511 "subtype": "Discovery", 00:13:34.511 "listen_addresses": [], 00:13:34.511 "allow_any_host": true, 00:13:34.511 "hosts": [] 00:13:34.511 }, 00:13:34.511 { 00:13:34.511 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:34.511 "subtype": "NVMe", 00:13:34.511 "listen_addresses": [ 00:13:34.511 { 00:13:34.511 "trtype": "VFIOUSER", 00:13:34.511 "adrfam": "IPv4", 00:13:34.511 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:34.511 "trsvcid": "0" 00:13:34.511 } 00:13:34.511 ], 00:13:34.511 "allow_any_host": true, 00:13:34.511 "hosts": [], 00:13:34.511 "serial_number": "SPDK1", 00:13:34.511 "model_number": "SPDK bdev Controller", 00:13:34.511 "max_namespaces": 32, 00:13:34.511 "min_cntlid": 1, 00:13:34.511 "max_cntlid": 65519, 00:13:34.511 "namespaces": [ 00:13:34.511 { 00:13:34.511 "nsid": 1, 00:13:34.511 "bdev_name": "Malloc1", 00:13:34.511 "name": "Malloc1", 00:13:34.511 "nguid": "2A2F796BCCFD4A968ADAEA6B1F3D2F51", 00:13:34.511 "uuid": "2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51" 00:13:34.511 }, 00:13:34.511 { 00:13:34.511 "nsid": 2, 00:13:34.511 "bdev_name": "Malloc3", 00:13:34.511 "name": "Malloc3", 00:13:34.511 "nguid": "54FDABB8E31C473D96F93D319B03D835", 00:13:34.511 "uuid": "54fdabb8-e31c-473d-96f9-3d319b03d835" 00:13:34.511 } 00:13:34.511 ] 00:13:34.511 }, 00:13:34.511 { 00:13:34.511 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:34.511 "subtype": "NVMe", 00:13:34.511 "listen_addresses": [ 00:13:34.511 { 00:13:34.511 "trtype": "VFIOUSER", 00:13:34.511 "adrfam": "IPv4", 00:13:34.511 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:34.511 "trsvcid": "0" 00:13:34.511 } 00:13:34.511 ], 00:13:34.511 "allow_any_host": true, 00:13:34.511 "hosts": [], 00:13:34.511 "serial_number": "SPDK2", 00:13:34.511 "model_number": "SPDK bdev Controller", 00:13:34.511 "max_namespaces": 32, 00:13:34.511 "min_cntlid": 1, 00:13:34.511 "max_cntlid": 65519, 00:13:34.511 "namespaces": [ 
00:13:34.511 { 00:13:34.511 "nsid": 1, 00:13:34.511 "bdev_name": "Malloc2", 00:13:34.511 "name": "Malloc2", 00:13:34.511 "nguid": "701B5BBDAF6B4C88B2692C56818C5396", 00:13:34.511 "uuid": "701b5bbd-af6b-4c88-b269-2c56818c5396" 00:13:34.511 } 00:13:34.511 ] 00:13:34.511 } 00:13:34.511 ] 00:13:34.511 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2301069 00:13:34.511 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:34.511 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:34.511 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:13:34.512 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:13:34.512 [2024-11-20 08:57:50.514847] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:13:34.512 [2024-11-20 08:57:50.514894] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2301259 ] 00:13:34.771 [2024-11-20 08:57:50.555771] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:13:34.771 [2024-11-20 08:57:50.560159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:34.771 [2024-11-20 08:57:50.560182] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f6c90d03000 00:13:34.771 [2024-11-20 08:57:50.561159] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.562169] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.563175] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.564182] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.565189] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.566191] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.567196] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:13:34.771 
[2024-11-20 08:57:50.568200] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:13:34.771 [2024-11-20 08:57:50.569206] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:13:34.771 [2024-11-20 08:57:50.569216] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6c90cf8000 00:13:34.771 [2024-11-20 08:57:50.570156] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:34.771 [2024-11-20 08:57:50.584184] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:13:34.771 [2024-11-20 08:57:50.584218] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:13:34.771 [2024-11-20 08:57:50.586279] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:34.771 [2024-11-20 08:57:50.586319] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:13:34.771 [2024-11-20 08:57:50.586381] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:13:34.771 [2024-11-20 08:57:50.586394] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:13:34.771 [2024-11-20 08:57:50.586398] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:13:34.771 [2024-11-20 08:57:50.587282] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:13:34.771 [2024-11-20 08:57:50.587293] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:13:34.771 [2024-11-20 08:57:50.587299] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:13:34.771 [2024-11-20 08:57:50.591953] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:13:34.771 [2024-11-20 08:57:50.591962] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:13:34.771 [2024-11-20 08:57:50.591969] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.592321] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:13:34.771 [2024-11-20 08:57:50.592329] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.593328] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:13:34.771 [2024-11-20 08:57:50.593336] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:13:34.771 [2024-11-20 08:57:50.593341] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.593347] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.593457] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:13:34.771 [2024-11-20 08:57:50.593462] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.593467] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:13:34.771 [2024-11-20 08:57:50.594333] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:13:34.771 [2024-11-20 08:57:50.595340] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:13:34.771 [2024-11-20 08:57:50.596353] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:34.771 [2024-11-20 08:57:50.597357] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:34.771 [2024-11-20 08:57:50.597399] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:13:34.771 [2024-11-20 08:57:50.598369] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:13:34.771 [2024-11-20 08:57:50.598378] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:13:34.771 [2024-11-20 08:57:50.598382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.598399] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:13:34.771 [2024-11-20 08:57:50.598407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.598418] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.771 [2024-11-20 08:57:50.598422] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.771 [2024-11-20 08:57:50.598426] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.771 [2024-11-20 08:57:50.598436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.771 [2024-11-20 08:57:50.602954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:13:34.771 [2024-11-20 08:57:50.602964] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:13:34.771 [2024-11-20 08:57:50.602969] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:13:34.771 [2024-11-20 08:57:50.602973] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:13:34.771 [2024-11-20 08:57:50.602977] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:13:34.771 [2024-11-20 08:57:50.602984] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:13:34.771 [2024-11-20 08:57:50.602988] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:13:34.771 [2024-11-20 08:57:50.602993] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.603004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.603014] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:13:34.771 [2024-11-20 08:57:50.610952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:13:34.771 [2024-11-20 08:57:50.610964] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.771 [2024-11-20 08:57:50.610971] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.771 [2024-11-20 08:57:50.610979] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.771 [2024-11-20 08:57:50.610986] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:13:34.771 [2024-11-20 08:57:50.610990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.610996] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.611005] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:13:34.771 [2024-11-20 08:57:50.618951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:13:34.771 [2024-11-20 08:57:50.618961] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:13:34.771 [2024-11-20 08:57:50.618967] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.618972] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.618978] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:13:34.771 [2024-11-20 08:57:50.618986] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.771 [2024-11-20 08:57:50.626952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:13:34.771 [2024-11-20 08:57:50.627010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.627018] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:13:34.772 
[2024-11-20 08:57:50.627025] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:13:34.772 [2024-11-20 08:57:50.627030] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:13:34.772 [2024-11-20 08:57:50.627033] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.627039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.634954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.634965] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:13:34.772 [2024-11-20 08:57:50.634975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.634982] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.634989] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.772 [2024-11-20 08:57:50.634993] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.772 [2024-11-20 08:57:50.634996] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.635002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.642954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.642968] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.642975] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.642982] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:13:34.772 [2024-11-20 08:57:50.642986] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.772 [2024-11-20 08:57:50.642989] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.642995] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.650954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.650965] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650971] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650985] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650989] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650994] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.650998] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:13:34.772 [2024-11-20 08:57:50.651003] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:13:34.772 [2024-11-20 08:57:50.651007] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:13:34.772 [2024-11-20 08:57:50.651022] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.658954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.658970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.666952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.666965] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.674955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 
08:57:50.674967] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.682954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.682970] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:13:34.772 [2024-11-20 08:57:50.682975] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:13:34.772 [2024-11-20 08:57:50.682978] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:13:34.772 [2024-11-20 08:57:50.682982] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:13:34.772 [2024-11-20 08:57:50.682985] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:13:34.772 [2024-11-20 08:57:50.682992] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:13:34.772 [2024-11-20 08:57:50.682999] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:13:34.772 [2024-11-20 08:57:50.683003] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:13:34.772 [2024-11-20 08:57:50.683006] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.683012] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.683018] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:13:34.772 [2024-11-20 08:57:50.683023] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:13:34.772 [2024-11-20 08:57:50.683026] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.683031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.683039] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:13:34.772 [2024-11-20 08:57:50.683043] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:13:34.772 [2024-11-20 08:57:50.683046] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:13:34.772 [2024-11-20 08:57:50.683051] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:13:34.772 [2024-11-20 08:57:50.690955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.690969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.690978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:13:34.772 [2024-11-20 08:57:50.690985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:13:34.772 ===================================================== 00:13:34.772 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:34.772 ===================================================== 00:13:34.772 Controller Capabilities/Features 00:13:34.772 
================================ 00:13:34.772 Vendor ID: 4e58 00:13:34.772 Subsystem Vendor ID: 4e58 00:13:34.772 Serial Number: SPDK2 00:13:34.772 Model Number: SPDK bdev Controller 00:13:34.772 Firmware Version: 25.01 00:13:34.772 Recommended Arb Burst: 6 00:13:34.772 IEEE OUI Identifier: 8d 6b 50 00:13:34.772 Multi-path I/O 00:13:34.772 May have multiple subsystem ports: Yes 00:13:34.772 May have multiple controllers: Yes 00:13:34.772 Associated with SR-IOV VF: No 00:13:34.772 Max Data Transfer Size: 131072 00:13:34.772 Max Number of Namespaces: 32 00:13:34.772 Max Number of I/O Queues: 127 00:13:34.772 NVMe Specification Version (VS): 1.3 00:13:34.772 NVMe Specification Version (Identify): 1.3 00:13:34.772 Maximum Queue Entries: 256 00:13:34.772 Contiguous Queues Required: Yes 00:13:34.772 Arbitration Mechanisms Supported 00:13:34.772 Weighted Round Robin: Not Supported 00:13:34.772 Vendor Specific: Not Supported 00:13:34.772 Reset Timeout: 15000 ms 00:13:34.772 Doorbell Stride: 4 bytes 00:13:34.772 NVM Subsystem Reset: Not Supported 00:13:34.772 Command Sets Supported 00:13:34.772 NVM Command Set: Supported 00:13:34.772 Boot Partition: Not Supported 00:13:34.772 Memory Page Size Minimum: 4096 bytes 00:13:34.772 Memory Page Size Maximum: 4096 bytes 00:13:34.772 Persistent Memory Region: Not Supported 00:13:34.772 Optional Asynchronous Events Supported 00:13:34.772 Namespace Attribute Notices: Supported 00:13:34.772 Firmware Activation Notices: Not Supported 00:13:34.772 ANA Change Notices: Not Supported 00:13:34.772 PLE Aggregate Log Change Notices: Not Supported 00:13:34.772 LBA Status Info Alert Notices: Not Supported 00:13:34.772 EGE Aggregate Log Change Notices: Not Supported 00:13:34.772 Normal NVM Subsystem Shutdown event: Not Supported 00:13:34.772 Zone Descriptor Change Notices: Not Supported 00:13:34.772 Discovery Log Change Notices: Not Supported 00:13:34.772 Controller Attributes 00:13:34.772 128-bit Host Identifier: Supported 00:13:34.772 
Non-Operational Permissive Mode: Not Supported 00:13:34.772 NVM Sets: Not Supported 00:13:34.772 Read Recovery Levels: Not Supported 00:13:34.772 Endurance Groups: Not Supported 00:13:34.772 Predictable Latency Mode: Not Supported 00:13:34.772 Traffic Based Keep ALive: Not Supported 00:13:34.773 Namespace Granularity: Not Supported 00:13:34.773 SQ Associations: Not Supported 00:13:34.773 UUID List: Not Supported 00:13:34.773 Multi-Domain Subsystem: Not Supported 00:13:34.773 Fixed Capacity Management: Not Supported 00:13:34.773 Variable Capacity Management: Not Supported 00:13:34.773 Delete Endurance Group: Not Supported 00:13:34.773 Delete NVM Set: Not Supported 00:13:34.773 Extended LBA Formats Supported: Not Supported 00:13:34.773 Flexible Data Placement Supported: Not Supported 00:13:34.773 00:13:34.773 Controller Memory Buffer Support 00:13:34.773 ================================ 00:13:34.773 Supported: No 00:13:34.773 00:13:34.773 Persistent Memory Region Support 00:13:34.773 ================================ 00:13:34.773 Supported: No 00:13:34.773 00:13:34.773 Admin Command Set Attributes 00:13:34.773 ============================ 00:13:34.773 Security Send/Receive: Not Supported 00:13:34.773 Format NVM: Not Supported 00:13:34.773 Firmware Activate/Download: Not Supported 00:13:34.773 Namespace Management: Not Supported 00:13:34.773 Device Self-Test: Not Supported 00:13:34.773 Directives: Not Supported 00:13:34.773 NVMe-MI: Not Supported 00:13:34.773 Virtualization Management: Not Supported 00:13:34.773 Doorbell Buffer Config: Not Supported 00:13:34.773 Get LBA Status Capability: Not Supported 00:13:34.773 Command & Feature Lockdown Capability: Not Supported 00:13:34.773 Abort Command Limit: 4 00:13:34.773 Async Event Request Limit: 4 00:13:34.773 Number of Firmware Slots: N/A 00:13:34.773 Firmware Slot 1 Read-Only: N/A 00:13:34.773 Firmware Activation Without Reset: N/A 00:13:34.773 Multiple Update Detection Support: N/A 00:13:34.773 Firmware Update 
Granularity: No Information Provided 00:13:34.773 Per-Namespace SMART Log: No 00:13:34.773 Asymmetric Namespace Access Log Page: Not Supported 00:13:34.773 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:13:34.773 Command Effects Log Page: Supported 00:13:34.773 Get Log Page Extended Data: Supported 00:13:34.773 Telemetry Log Pages: Not Supported 00:13:34.773 Persistent Event Log Pages: Not Supported 00:13:34.773 Supported Log Pages Log Page: May Support 00:13:34.773 Commands Supported & Effects Log Page: Not Supported 00:13:34.773 Feature Identifiers & Effects Log Page:May Support 00:13:34.773 NVMe-MI Commands & Effects Log Page: May Support 00:13:34.773 Data Area 4 for Telemetry Log: Not Supported 00:13:34.773 Error Log Page Entries Supported: 128 00:13:34.773 Keep Alive: Supported 00:13:34.773 Keep Alive Granularity: 10000 ms 00:13:34.773 00:13:34.773 NVM Command Set Attributes 00:13:34.773 ========================== 00:13:34.773 Submission Queue Entry Size 00:13:34.773 Max: 64 00:13:34.773 Min: 64 00:13:34.773 Completion Queue Entry Size 00:13:34.773 Max: 16 00:13:34.773 Min: 16 00:13:34.773 Number of Namespaces: 32 00:13:34.773 Compare Command: Supported 00:13:34.773 Write Uncorrectable Command: Not Supported 00:13:34.773 Dataset Management Command: Supported 00:13:34.773 Write Zeroes Command: Supported 00:13:34.773 Set Features Save Field: Not Supported 00:13:34.773 Reservations: Not Supported 00:13:34.773 Timestamp: Not Supported 00:13:34.773 Copy: Supported 00:13:34.773 Volatile Write Cache: Present 00:13:34.773 Atomic Write Unit (Normal): 1 00:13:34.773 Atomic Write Unit (PFail): 1 00:13:34.773 Atomic Compare & Write Unit: 1 00:13:34.773 Fused Compare & Write: Supported 00:13:34.773 Scatter-Gather List 00:13:34.773 SGL Command Set: Supported (Dword aligned) 00:13:34.773 SGL Keyed: Not Supported 00:13:34.773 SGL Bit Bucket Descriptor: Not Supported 00:13:34.773 SGL Metadata Pointer: Not Supported 00:13:34.773 Oversized SGL: Not Supported 00:13:34.773 SGL 
Metadata Address: Not Supported 00:13:34.773 SGL Offset: Not Supported 00:13:34.773 Transport SGL Data Block: Not Supported 00:13:34.773 Replay Protected Memory Block: Not Supported 00:13:34.773 00:13:34.773 Firmware Slot Information 00:13:34.773 ========================= 00:13:34.773 Active slot: 1 00:13:34.773 Slot 1 Firmware Revision: 25.01 00:13:34.773 00:13:34.773 00:13:34.773 Commands Supported and Effects 00:13:34.773 ============================== 00:13:34.773 Admin Commands 00:13:34.773 -------------- 00:13:34.773 Get Log Page (02h): Supported 00:13:34.773 Identify (06h): Supported 00:13:34.773 Abort (08h): Supported 00:13:34.773 Set Features (09h): Supported 00:13:34.773 Get Features (0Ah): Supported 00:13:34.773 Asynchronous Event Request (0Ch): Supported 00:13:34.773 Keep Alive (18h): Supported 00:13:34.773 I/O Commands 00:13:34.773 ------------ 00:13:34.773 Flush (00h): Supported LBA-Change 00:13:34.773 Write (01h): Supported LBA-Change 00:13:34.773 Read (02h): Supported 00:13:34.773 Compare (05h): Supported 00:13:34.773 Write Zeroes (08h): Supported LBA-Change 00:13:34.773 Dataset Management (09h): Supported LBA-Change 00:13:34.773 Copy (19h): Supported LBA-Change 00:13:34.773 00:13:34.773 Error Log 00:13:34.773 ========= 00:13:34.773 00:13:34.773 Arbitration 00:13:34.773 =========== 00:13:34.773 Arbitration Burst: 1 00:13:34.773 00:13:34.773 Power Management 00:13:34.773 ================ 00:13:34.773 Number of Power States: 1 00:13:34.773 Current Power State: Power State #0 00:13:34.773 Power State #0: 00:13:34.773 Max Power: 0.00 W 00:13:34.773 Non-Operational State: Operational 00:13:34.773 Entry Latency: Not Reported 00:13:34.773 Exit Latency: Not Reported 00:13:34.773 Relative Read Throughput: 0 00:13:34.773 Relative Read Latency: 0 00:13:34.773 Relative Write Throughput: 0 00:13:34.773 Relative Write Latency: 0 00:13:34.773 Idle Power: Not Reported 00:13:34.773 Active Power: Not Reported 00:13:34.773 Non-Operational Permissive Mode: Not 
Supported 00:13:34.773 00:13:34.773 Health Information 00:13:34.773 ================== 00:13:34.773 Critical Warnings: 00:13:34.773 Available Spare Space: OK 00:13:34.773 Temperature: OK 00:13:34.773 Device Reliability: OK 00:13:34.773 Read Only: No 00:13:34.773 Volatile Memory Backup: OK 00:13:34.773 Current Temperature: 0 Kelvin (-273 Celsius) 00:13:34.773 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:13:34.773 Available Spare: 0% 00:13:34.773 Available Sp[2024-11-20 08:57:50.691078] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:13:34.773 [2024-11-20 08:57:50.698954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:13:34.773 [2024-11-20 08:57:50.698985] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:13:34.773 [2024-11-20 08:57:50.698993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.773 [2024-11-20 08:57:50.698999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.773 [2024-11-20 08:57:50.699005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.773 [2024-11-20 08:57:50.699010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:13:34.773 [2024-11-20 08:57:50.699067] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:13:34.773 [2024-11-20 08:57:50.699077] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:13:34.773 
[2024-11-20 08:57:50.700065] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:34.773 [2024-11-20 08:57:50.700109] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:13:34.773 [2024-11-20 08:57:50.700115] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:13:34.773 [2024-11-20 08:57:50.701068] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:13:34.773 [2024-11-20 08:57:50.701079] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:13:34.773 [2024-11-20 08:57:50.701125] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:13:34.773 [2024-11-20 08:57:50.704107] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:13:34.773 are Threshold: 0% 00:13:34.773 Life Percentage Used: 0% 00:13:34.773 Data Units Read: 0 00:13:34.773 Data Units Written: 0 00:13:34.773 Host Read Commands: 0 00:13:34.773 Host Write Commands: 0 00:13:34.773 Controller Busy Time: 0 minutes 00:13:34.773 Power Cycles: 0 00:13:34.773 Power On Hours: 0 hours 00:13:34.773 Unsafe Shutdowns: 0 00:13:34.773 Unrecoverable Media Errors: 0 00:13:34.773 Lifetime Error Log Entries: 0 00:13:34.773 Warning Temperature Time: 0 minutes 00:13:34.773 Critical Temperature Time: 0 minutes 00:13:34.773 00:13:34.773 Number of Queues 00:13:34.773 ================ 00:13:34.773 Number of I/O Submission Queues: 127 00:13:34.773 Number of I/O Completion Queues: 127 00:13:34.773 00:13:34.774 Active Namespaces 00:13:34.774 ================= 00:13:34.774 Namespace ID:1 00:13:34.774 Error Recovery Timeout: Unlimited 
00:13:34.774 Command Set Identifier: NVM (00h) 00:13:34.774 Deallocate: Supported 00:13:34.774 Deallocated/Unwritten Error: Not Supported 00:13:34.774 Deallocated Read Value: Unknown 00:13:34.774 Deallocate in Write Zeroes: Not Supported 00:13:34.774 Deallocated Guard Field: 0xFFFF 00:13:34.774 Flush: Supported 00:13:34.774 Reservation: Supported 00:13:34.774 Namespace Sharing Capabilities: Multiple Controllers 00:13:34.774 Size (in LBAs): 131072 (0GiB) 00:13:34.774 Capacity (in LBAs): 131072 (0GiB) 00:13:34.774 Utilization (in LBAs): 131072 (0GiB) 00:13:34.774 NGUID: 701B5BBDAF6B4C88B2692C56818C5396 00:13:34.774 UUID: 701b5bbd-af6b-4c88-b269-2c56818c5396 00:13:34.774 Thin Provisioning: Not Supported 00:13:34.774 Per-NS Atomic Units: Yes 00:13:34.774 Atomic Boundary Size (Normal): 0 00:13:34.774 Atomic Boundary Size (PFail): 0 00:13:34.774 Atomic Boundary Offset: 0 00:13:34.774 Maximum Single Source Range Length: 65535 00:13:34.774 Maximum Copy Length: 65535 00:13:34.774 Maximum Source Range Count: 1 00:13:34.774 NGUID/EUI64 Never Reused: No 00:13:34.774 Namespace Write Protected: No 00:13:34.774 Number of LBA Formats: 1 00:13:34.774 Current LBA Format: LBA Format #00 00:13:34.774 LBA Format #00: Data Size: 512 Metadata Size: 0 00:13:34.774 00:13:34.774 08:57:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:13:35.030 [2024-11-20 08:57:50.919511] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:40.286 Initializing NVMe Controllers 00:13:40.286 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:40.286 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:13:40.286 Initialization complete. Launching workers. 00:13:40.286 ======================================================== 00:13:40.286 Latency(us) 00:13:40.286 Device Information : IOPS MiB/s Average min max 00:13:40.286 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39942.10 156.02 3204.45 957.35 10605.69 00:13:40.286 ======================================================== 00:13:40.286 Total : 39942.10 156.02 3204.45 957.35 10605.69 00:13:40.286 00:13:40.286 [2024-11-20 08:57:56.022206] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:40.286 08:57:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:13:40.286 [2024-11-20 08:57:56.260871] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:45.539 Initializing NVMe Controllers 00:13:45.539 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:45.539 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:13:45.539 Initialization complete. Launching workers. 
00:13:45.539 ======================================================== 00:13:45.539 Latency(us) 00:13:45.539 Device Information : IOPS MiB/s Average min max 00:13:45.539 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39950.59 156.06 3205.19 958.40 7067.48 00:13:45.539 ======================================================== 00:13:45.539 Total : 39950.59 156.06 3205.19 958.40 7067.48 00:13:45.539 00:13:45.539 [2024-11-20 08:58:01.285165] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:45.539 08:58:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:13:45.539 [2024-11-20 08:58:01.488631] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:50.793 [2024-11-20 08:58:06.631053] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:50.793 Initializing NVMe Controllers 00:13:50.793 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.793 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:50.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:50.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:50.793 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:50.793 Initialization complete. Launching workers. 
00:13:50.793 Starting thread on core 2 00:13:50.793 Starting thread on core 3 00:13:50.793 Starting thread on core 1 00:13:50.793 08:58:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:51.050 [2024-11-20 08:58:06.934399] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.328 [2024-11-20 08:58:10.002209] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.328 Initializing NVMe Controllers 00:13:54.328 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.328 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.328 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:54.328 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:54.328 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:54.328 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:54.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:54.328 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:54.328 Initialization complete. Launching workers. 
00:13:54.328 Starting thread on core 1 with urgent priority queue 00:13:54.328 Starting thread on core 2 with urgent priority queue 00:13:54.328 Starting thread on core 3 with urgent priority queue 00:13:54.328 Starting thread on core 0 with urgent priority queue 00:13:54.328 SPDK bdev Controller (SPDK2 ) core 0: 9200.00 IO/s 10.87 secs/100000 ios 00:13:54.328 SPDK bdev Controller (SPDK2 ) core 1: 7965.33 IO/s 12.55 secs/100000 ios 00:13:54.328 SPDK bdev Controller (SPDK2 ) core 2: 6695.33 IO/s 14.94 secs/100000 ios 00:13:54.328 SPDK bdev Controller (SPDK2 ) core 3: 6777.67 IO/s 14.75 secs/100000 ios 00:13:54.328 ======================================================== 00:13:54.328 00:13:54.328 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.328 [2024-11-20 08:58:10.286434] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:54.328 Initializing NVMe Controllers 00:13:54.328 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.328 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:54.328 Namespace ID: 1 size: 0GB 00:13:54.328 Initialization complete. 00:13:54.328 INFO: using host memory buffer for IO 00:13:54.328 Hello world! 
00:13:54.328 [2024-11-20 08:58:10.299526] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:54.328 08:58:10 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:54.585 [2024-11-20 08:58:10.581856] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:55.956 Initializing NVMe Controllers 00:13:55.956 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.956 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:55.956 Initialization complete. Launching workers. 00:13:55.956 submit (in ns) avg, min, max = 7152.8, 3238.3, 4000299.1 00:13:55.956 complete (in ns) avg, min, max = 19683.2, 1767.8, 4019740.0 00:13:55.956 00:13:55.956 Submit histogram 00:13:55.956 ================ 00:13:55.956 Range in us Cumulative Count 00:13:55.956 3.228 - 3.242: 0.0061% ( 1) 00:13:55.956 3.242 - 3.256: 0.0123% ( 1) 00:13:55.956 3.256 - 3.270: 0.0368% ( 4) 00:13:55.956 3.270 - 3.283: 0.0798% ( 7) 00:13:55.956 3.283 - 3.297: 0.6448% ( 92) 00:13:55.956 3.297 - 3.311: 3.0828% ( 397) 00:13:55.956 3.311 - 3.325: 6.5586% ( 566) 00:13:55.956 3.325 - 3.339: 10.4950% ( 641) 00:13:55.956 3.339 - 3.353: 15.2420% ( 773) 00:13:55.956 3.353 - 3.367: 20.9899% ( 936) 00:13:55.956 3.367 - 3.381: 26.5721% ( 909) 00:13:55.956 3.381 - 3.395: 31.7612% ( 845) 00:13:55.956 3.395 - 3.409: 37.2390% ( 892) 00:13:55.956 3.409 - 3.423: 42.1764% ( 804) 00:13:55.956 3.423 - 3.437: 46.4259% ( 692) 00:13:55.956 3.437 - 3.450: 51.4861% ( 824) 00:13:55.956 3.450 - 3.464: 58.0324% ( 1066) 00:13:55.956 3.464 - 3.478: 62.8408% ( 783) 00:13:55.956 3.478 - 3.492: 67.0290% ( 682) 00:13:55.956 3.492 - 3.506: 72.6541% ( 916) 00:13:55.956 3.506 - 3.520: 77.5362% ( 795) 
00:13:55.956 3.520 - 3.534: 80.8339% ( 537) 00:13:55.956 3.534 - 3.548: 83.4193% ( 421) 00:13:55.956 3.548 - 3.562: 85.3230% ( 310) 00:13:55.956 3.562 - 3.590: 87.4846% ( 352) 00:13:55.956 3.590 - 3.617: 88.6146% ( 184) 00:13:55.956 3.617 - 3.645: 90.0577% ( 235) 00:13:55.956 3.645 - 3.673: 91.7772% ( 280) 00:13:55.956 3.673 - 3.701: 93.4414% ( 271) 00:13:55.956 3.701 - 3.729: 95.1486% ( 278) 00:13:55.956 3.729 - 3.757: 96.6532% ( 245) 00:13:55.956 3.757 - 3.784: 97.8261% ( 191) 00:13:55.956 3.784 - 3.812: 98.5937% ( 125) 00:13:55.956 3.812 - 3.840: 99.1280% ( 87) 00:13:55.956 3.840 - 3.868: 99.3920% ( 43) 00:13:55.956 3.868 - 3.896: 99.5640% ( 28) 00:13:55.956 3.896 - 3.923: 99.6315% ( 11) 00:13:55.956 3.923 - 3.951: 99.6500% ( 3) 00:13:55.956 3.951 - 3.979: 99.6561% ( 1) 00:13:55.956 4.035 - 4.063: 99.6622% ( 1) 00:13:55.956 4.202 - 4.230: 99.6684% ( 1) 00:13:55.956 5.315 - 5.343: 99.6745% ( 1) 00:13:55.956 5.370 - 5.398: 99.6807% ( 1) 00:13:55.956 5.426 - 5.454: 99.6868% ( 1) 00:13:55.956 5.482 - 5.510: 99.6930% ( 1) 00:13:55.956 5.537 - 5.565: 99.6991% ( 1) 00:13:55.956 5.593 - 5.621: 99.7052% ( 1) 00:13:55.956 5.621 - 5.649: 99.7175% ( 2) 00:13:55.956 5.760 - 5.788: 99.7237% ( 1) 00:13:55.956 5.871 - 5.899: 99.7298% ( 1) 00:13:55.956 6.261 - 6.289: 99.7359% ( 1) 00:13:55.956 6.483 - 6.511: 99.7421% ( 1) 00:13:55.956 6.623 - 6.650: 99.7482% ( 1) 00:13:55.956 6.734 - 6.762: 99.7544% ( 1) 00:13:55.956 7.040 - 7.068: 99.7605% ( 1) 00:13:55.956 7.068 - 7.096: 99.7666% ( 1) 00:13:55.956 7.290 - 7.346: 99.7789% ( 2) 00:13:55.956 7.624 - 7.680: 99.7851% ( 1) 00:13:55.956 7.736 - 7.791: 99.7912% ( 1) 00:13:55.956 7.847 - 7.903: 99.7973% ( 1) 00:13:55.956 7.903 - 7.958: 99.8035% ( 1) 00:13:55.956 8.070 - 8.125: 99.8096% ( 1) 00:13:55.956 8.181 - 8.237: 99.8219% ( 2) 00:13:55.956 8.348 - 8.403: 99.8281% ( 1) 00:13:55.956 8.459 - 8.515: 99.8342% ( 1) 00:13:55.956 8.515 - 8.570: 99.8403% ( 1) 00:13:55.956 8.570 - 8.626: 99.8526% ( 2) 00:13:55.956 8.682 - 8.737: 99.8588% ( 
1) 00:13:55.956 8.904 - 8.960: 99.8649% ( 1) 00:13:55.956 9.016 - 9.071: 99.8710% ( 1) 00:13:55.956 9.071 - 9.127: 99.8772% ( 1) 00:13:55.956 9.461 - 9.517: 99.8833% ( 1) 00:13:55.956 9.517 - 9.572: 99.8895% ( 1) 00:13:55.956 9.850 - 9.906: 99.8956% ( 1) 00:13:55.956 14.080 - 14.136: 99.9017% ( 1) 00:13:55.956 15.694 - 15.805: 99.9079% ( 1) 00:13:55.957 3989.148 - 4017.642: 100.0000% ( 15) 00:13:55.957 00:13:55.957 Complete histogram 00:13:55.957 ================== 00:13:55.957 Range in us Cumulative Count 00:13:55.957 1.767 - [2024-11-20 08:58:11.677024] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:55.957 1.774: 0.0061% ( 1) 00:13:55.957 1.781 - 1.795: 0.0123% ( 1) 00:13:55.957 1.795 - 1.809: 0.0184% ( 1) 00:13:55.957 1.809 - 1.823: 0.0676% ( 8) 00:13:55.957 1.823 - 1.837: 0.4974% ( 70) 00:13:55.957 1.837 - 1.850: 1.4616% ( 157) 00:13:55.957 1.850 - 1.864: 2.2967% ( 136) 00:13:55.957 1.864 - 1.878: 12.0241% ( 1584) 00:13:55.957 1.878 - 1.892: 47.3225% ( 5748) 00:13:55.957 1.892 - 1.906: 78.8995% ( 5142) 00:13:55.957 1.906 - 1.920: 91.9799% ( 2130) 00:13:55.957 1.920 - 1.934: 95.0626% ( 502) 00:13:55.957 1.934 - 1.948: 96.3277% ( 206) 00:13:55.957 1.948 - 1.962: 97.4146% ( 177) 00:13:55.957 1.962 - 1.976: 98.3296% ( 149) 00:13:55.957 1.976 - 1.990: 98.9253% ( 97) 00:13:55.957 1.990 - 2.003: 99.1832% ( 42) 00:13:55.957 2.003 - 2.017: 99.2201% ( 6) 00:13:55.957 2.017 - 2.031: 99.2324% ( 2) 00:13:55.957 2.031 - 2.045: 99.2569% ( 4) 00:13:55.957 2.045 - 2.059: 99.2754% ( 3) 00:13:55.957 2.059 - 2.073: 99.2876% ( 2) 00:13:55.957 2.087 - 2.101: 99.2938% ( 1) 00:13:55.957 2.101 - 2.115: 99.2999% ( 1) 00:13:55.957 2.129 - 2.143: 99.3061% ( 1) 00:13:55.957 2.198 - 2.212: 99.3122% ( 1) 00:13:55.957 2.226 - 2.240: 99.3245% ( 2) 00:13:55.957 2.310 - 2.323: 99.3306% ( 1) 00:13:55.957 2.365 - 2.379: 99.3368% ( 1) 00:13:55.957 3.729 - 3.757: 99.3429% ( 1) 00:13:55.957 3.757 - 3.784: 99.3491% ( 1) 00:13:55.957 3.840 - 
3.868: 99.3552% ( 1) 00:13:55.957 3.896 - 3.923: 99.3613% ( 1) 00:13:55.957 3.951 - 3.979: 99.3675% ( 1) 00:13:55.957 4.619 - 4.647: 99.3736% ( 1) 00:13:55.957 4.870 - 4.897: 99.3859% ( 2) 00:13:55.957 4.897 - 4.925: 99.3920% ( 1) 00:13:55.957 4.953 - 4.981: 99.3982% ( 1) 00:13:55.957 4.981 - 5.009: 99.4043% ( 1) 00:13:55.957 5.092 - 5.120: 99.4105% ( 1) 00:13:55.957 5.203 - 5.231: 99.4166% ( 1) 00:13:55.957 5.287 - 5.315: 99.4227% ( 1) 00:13:55.957 5.343 - 5.370: 99.4289% ( 1) 00:13:55.957 5.454 - 5.482: 99.4350% ( 1) 00:13:55.957 5.593 - 5.621: 99.4412% ( 1) 00:13:55.957 5.621 - 5.649: 99.4473% ( 1) 00:13:55.957 5.899 - 5.927: 99.4535% ( 1) 00:13:55.957 6.150 - 6.177: 99.4596% ( 1) 00:13:55.957 6.261 - 6.289: 99.4657% ( 1) 00:13:55.957 6.344 - 6.372: 99.4719% ( 1) 00:13:55.957 6.428 - 6.456: 99.4780% ( 1) 00:13:55.957 6.483 - 6.511: 99.4842% ( 1) 00:13:55.957 6.567 - 6.595: 99.4903% ( 1) 00:13:55.957 6.678 - 6.706: 99.4964% ( 1) 00:13:55.957 6.901 - 6.929: 99.5026% ( 1) 00:13:55.957 6.957 - 6.984: 99.5087% ( 1) 00:13:55.957 7.290 - 7.346: 99.5149% ( 1) 00:13:55.957 7.457 - 7.513: 99.5210% ( 1) 00:13:55.957 7.569 - 7.624: 99.5271% ( 1) 00:13:55.957 7.624 - 7.680: 99.5333% ( 1) 00:13:55.957 7.847 - 7.903: 99.5394% ( 1) 00:13:55.957 39.179 - 39.402: 99.5456% ( 1) 00:13:55.957 143.360 - 144.250: 99.5517% ( 1) 00:13:55.957 2478.970 - 2493.217: 99.5578% ( 1) 00:13:55.957 3177.071 - 3191.318: 99.5640% ( 1) 00:13:55.957 3989.148 - 4017.642: 99.9939% ( 70) 00:13:55.957 4017.642 - 4046.136: 100.0000% ( 1) 00:13:55.957 00:13:55.957 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:55.957 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:55.957 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:13:55.957 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:55.957 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:55.957 [ 00:13:55.957 { 00:13:55.957 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:55.957 "subtype": "Discovery", 00:13:55.957 "listen_addresses": [], 00:13:55.957 "allow_any_host": true, 00:13:55.957 "hosts": [] 00:13:55.957 }, 00:13:55.957 { 00:13:55.957 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:55.957 "subtype": "NVMe", 00:13:55.957 "listen_addresses": [ 00:13:55.957 { 00:13:55.957 "trtype": "VFIOUSER", 00:13:55.957 "adrfam": "IPv4", 00:13:55.957 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:55.957 "trsvcid": "0" 00:13:55.957 } 00:13:55.957 ], 00:13:55.957 "allow_any_host": true, 00:13:55.957 "hosts": [], 00:13:55.957 "serial_number": "SPDK1", 00:13:55.957 "model_number": "SPDK bdev Controller", 00:13:55.957 "max_namespaces": 32, 00:13:55.957 "min_cntlid": 1, 00:13:55.957 "max_cntlid": 65519, 00:13:55.957 "namespaces": [ 00:13:55.957 { 00:13:55.957 "nsid": 1, 00:13:55.957 "bdev_name": "Malloc1", 00:13:55.957 "name": "Malloc1", 00:13:55.957 "nguid": "2A2F796BCCFD4A968ADAEA6B1F3D2F51", 00:13:55.957 "uuid": "2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51" 00:13:55.957 }, 00:13:55.957 { 00:13:55.957 "nsid": 2, 00:13:55.957 "bdev_name": "Malloc3", 00:13:55.957 "name": "Malloc3", 00:13:55.957 "nguid": "54FDABB8E31C473D96F93D319B03D835", 00:13:55.957 "uuid": "54fdabb8-e31c-473d-96f9-3d319b03d835" 00:13:55.957 } 00:13:55.957 ] 00:13:55.957 }, 00:13:55.957 { 00:13:55.957 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:55.957 "subtype": "NVMe", 00:13:55.958 "listen_addresses": [ 00:13:55.958 { 00:13:55.958 "trtype": "VFIOUSER", 00:13:55.958 "adrfam": "IPv4", 00:13:55.958 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 
00:13:55.958 "trsvcid": "0" 00:13:55.958 } 00:13:55.958 ], 00:13:55.958 "allow_any_host": true, 00:13:55.958 "hosts": [], 00:13:55.958 "serial_number": "SPDK2", 00:13:55.958 "model_number": "SPDK bdev Controller", 00:13:55.958 "max_namespaces": 32, 00:13:55.958 "min_cntlid": 1, 00:13:55.958 "max_cntlid": 65519, 00:13:55.958 "namespaces": [ 00:13:55.958 { 00:13:55.958 "nsid": 1, 00:13:55.958 "bdev_name": "Malloc2", 00:13:55.958 "name": "Malloc2", 00:13:55.958 "nguid": "701B5BBDAF6B4C88B2692C56818C5396", 00:13:55.958 "uuid": "701b5bbd-af6b-4c88-b269-2c56818c5396" 00:13:55.958 } 00:13:55.958 ] 00:13:55.958 } 00:13:55.958 ] 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2304712 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:55.958 08:58:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:56.215 [2024-11-20 08:58:12.077419] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:56.215 Malloc4 00:13:56.215 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:56.473 [2024-11-20 08:58:12.373617] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:56.473 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:56.473 Asynchronous Event Request test 00:13:56.473 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.473 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:56.473 Registering asynchronous event callbacks... 00:13:56.473 Starting namespace attribute notice tests for all controllers... 00:13:56.473 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:56.473 aer_cb - Changed Namespace 00:13:56.473 Cleaning up... 
00:13:56.730 [ 00:13:56.730 { 00:13:56.730 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:56.730 "subtype": "Discovery", 00:13:56.730 "listen_addresses": [], 00:13:56.730 "allow_any_host": true, 00:13:56.730 "hosts": [] 00:13:56.730 }, 00:13:56.730 { 00:13:56.730 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:56.730 "subtype": "NVMe", 00:13:56.730 "listen_addresses": [ 00:13:56.730 { 00:13:56.730 "trtype": "VFIOUSER", 00:13:56.730 "adrfam": "IPv4", 00:13:56.730 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:56.730 "trsvcid": "0" 00:13:56.730 } 00:13:56.730 ], 00:13:56.730 "allow_any_host": true, 00:13:56.730 "hosts": [], 00:13:56.730 "serial_number": "SPDK1", 00:13:56.730 "model_number": "SPDK bdev Controller", 00:13:56.730 "max_namespaces": 32, 00:13:56.730 "min_cntlid": 1, 00:13:56.730 "max_cntlid": 65519, 00:13:56.730 "namespaces": [ 00:13:56.730 { 00:13:56.730 "nsid": 1, 00:13:56.730 "bdev_name": "Malloc1", 00:13:56.730 "name": "Malloc1", 00:13:56.730 "nguid": "2A2F796BCCFD4A968ADAEA6B1F3D2F51", 00:13:56.730 "uuid": "2a2f796b-ccfd-4a96-8ada-ea6b1f3d2f51" 00:13:56.730 }, 00:13:56.730 { 00:13:56.730 "nsid": 2, 00:13:56.730 "bdev_name": "Malloc3", 00:13:56.730 "name": "Malloc3", 00:13:56.730 "nguid": "54FDABB8E31C473D96F93D319B03D835", 00:13:56.730 "uuid": "54fdabb8-e31c-473d-96f9-3d319b03d835" 00:13:56.730 } 00:13:56.730 ] 00:13:56.730 }, 00:13:56.730 { 00:13:56.730 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:56.730 "subtype": "NVMe", 00:13:56.730 "listen_addresses": [ 00:13:56.730 { 00:13:56.730 "trtype": "VFIOUSER", 00:13:56.730 "adrfam": "IPv4", 00:13:56.730 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:56.730 "trsvcid": "0" 00:13:56.730 } 00:13:56.730 ], 00:13:56.730 "allow_any_host": true, 00:13:56.730 "hosts": [], 00:13:56.730 "serial_number": "SPDK2", 00:13:56.730 "model_number": "SPDK bdev Controller", 00:13:56.730 "max_namespaces": 32, 00:13:56.730 "min_cntlid": 1, 00:13:56.730 "max_cntlid": 65519, 00:13:56.730 "namespaces": [ 
00:13:56.730 { 00:13:56.730 "nsid": 1, 00:13:56.730 "bdev_name": "Malloc2", 00:13:56.730 "name": "Malloc2", 00:13:56.730 "nguid": "701B5BBDAF6B4C88B2692C56818C5396", 00:13:56.730 "uuid": "701b5bbd-af6b-4c88-b269-2c56818c5396" 00:13:56.730 }, 00:13:56.730 { 00:13:56.730 "nsid": 2, 00:13:56.730 "bdev_name": "Malloc4", 00:13:56.730 "name": "Malloc4", 00:13:56.730 "nguid": "B7D26CD243CA44B0A7E201FF65AB2458", 00:13:56.730 "uuid": "b7d26cd2-43ca-44b0-a7e2-01ff65ab2458" 00:13:56.730 } 00:13:56.730 ] 00:13:56.730 } 00:13:56.730 ] 00:13:56.730 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2304712 00:13:56.730 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:56.730 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2297091 00:13:56.730 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2297091 ']' 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2297091 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2297091 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2297091' 00:13:56.731 killing process with pid 2297091 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@973 -- # kill 2297091 00:13:56.731 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2297091 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2304948 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2304948' 00:13:56.989 Process pid: 2304948 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2304948 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 2304948 ']' 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:56.989 
08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:56.989 08:58:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:56.989 [2024-11-20 08:58:12.951047] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:56.989 [2024-11-20 08:58:12.951962] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:13:56.989 [2024-11-20 08:58:12.952003] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:56.989 [2024-11-20 08:58:13.027334] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:57.248 [2024-11-20 08:58:13.065074] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:57.248 [2024-11-20 08:58:13.065114] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:57.248 [2024-11-20 08:58:13.065121] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:57.248 [2024-11-20 08:58:13.065126] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:57.248 [2024-11-20 08:58:13.065131] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:57.248 [2024-11-20 08:58:13.066594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:57.248 [2024-11-20 08:58:13.066698] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:57.248 [2024-11-20 08:58:13.066806] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:57.248 [2024-11-20 08:58:13.066807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:57.248 [2024-11-20 08:58:13.134597] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:57.248 [2024-11-20 08:58:13.135629] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:57.248 [2024-11-20 08:58:13.135714] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:13:57.248 [2024-11-20 08:58:13.136161] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:57.248 [2024-11-20 08:58:13.136186] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:13:57.248 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:57.248 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:13:57.248 08:58:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:58.186 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:58.445 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:58.445 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:58.445 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:58.445 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:58.445 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:58.705 Malloc1 00:13:58.705 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:58.962 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:58.962 08:58:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:59.221 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:59.221 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:59.221 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:59.478 Malloc2 00:13:59.478 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:59.735 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2304948 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 2304948 ']' 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 2304948 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:13:59.994 08:58:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:59.994 08:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2304948 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2304948' 00:14:00.253 killing process with pid 2304948 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 2304948 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 2304948 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:00.253 00:14:00.253 real 0m51.220s 00:14:00.253 user 3m18.412s 00:14:00.253 sys 0m3.176s 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:00.253 ************************************ 00:14:00.253 END TEST nvmf_vfio_user 00:14:00.253 ************************************ 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.253 08:58:16 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:00.513 ************************************ 00:14:00.513 START TEST nvmf_vfio_user_nvme_compliance 00:14:00.513 ************************************ 00:14:00.513 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:00.513 * Looking for test storage... 00:14:00.513 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:00.513 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:00.514 08:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:00.514 08:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.514 --rc genhtml_branch_coverage=1 00:14:00.514 --rc genhtml_function_coverage=1 00:14:00.514 --rc genhtml_legend=1 00:14:00.514 --rc geninfo_all_blocks=1 00:14:00.514 --rc geninfo_unexecuted_blocks=1 00:14:00.514 00:14:00.514 ' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.514 --rc genhtml_branch_coverage=1 00:14:00.514 --rc genhtml_function_coverage=1 00:14:00.514 --rc genhtml_legend=1 00:14:00.514 --rc geninfo_all_blocks=1 00:14:00.514 --rc geninfo_unexecuted_blocks=1 00:14:00.514 00:14:00.514 ' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.514 --rc genhtml_branch_coverage=1 00:14:00.514 --rc genhtml_function_coverage=1 00:14:00.514 --rc 
genhtml_legend=1 00:14:00.514 --rc geninfo_all_blocks=1 00:14:00.514 --rc geninfo_unexecuted_blocks=1 00:14:00.514 00:14:00.514 ' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:00.514 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:00.514 --rc genhtml_branch_coverage=1 00:14:00.514 --rc genhtml_function_coverage=1 00:14:00.514 --rc genhtml_legend=1 00:14:00.514 --rc geninfo_all_blocks=1 00:14:00.514 --rc geninfo_unexecuted_blocks=1 00:14:00.514 00:14:00.514 ' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:00.514 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.515 08:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:00.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2305576 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2305576' 00:14:00.515 Process pid: 2305576 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:00.515 08:58:16 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2305576 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 2305576 ']' 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:00.515 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:00.515 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:00.773 [2024-11-20 08:58:16.567223] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:14:00.773 [2024-11-20 08:58:16.567272] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:00.773 [2024-11-20 08:58:16.643557] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:00.773 [2024-11-20 08:58:16.686124] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:00.773 [2024-11-20 08:58:16.686160] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:00.773 [2024-11-20 08:58:16.686167] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:00.773 [2024-11-20 08:58:16.686176] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:00.773 [2024-11-20 08:58:16.686181] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:00.773 [2024-11-20 08:58:16.687512] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:00.773 [2024-11-20 08:58:16.687622] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:00.773 [2024-11-20 08:58:16.687623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:00.773 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:00.773 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:00.773 08:58:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.145 08:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:02.145 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.146 malloc0 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.146 08:58:17 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:02.146 08:58:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:02.146 00:14:02.146 00:14:02.146 CUnit - A unit testing framework for C - Version 2.1-3 00:14:02.146 http://cunit.sourceforge.net/ 00:14:02.146 00:14:02.146 00:14:02.146 Suite: nvme_compliance 00:14:02.146 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 08:58:18.025399] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.146 [2024-11-20 08:58:18.026748] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:02.146 [2024-11-20 08:58:18.026763] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:02.146 [2024-11-20 08:58:18.026769] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:02.146 [2024-11-20 08:58:18.030424] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.146 passed 00:14:02.146 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 08:58:18.107957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.146 [2024-11-20 08:58:18.110974] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.146 passed 00:14:02.403 Test: admin_identify_ns ...[2024-11-20 08:58:18.190380] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.403 [2024-11-20 08:58:18.253962] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:02.403 [2024-11-20 08:58:18.261959] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:02.403 [2024-11-20 08:58:18.283055] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.403 passed 00:14:02.403 Test: admin_get_features_mandatory_features ...[2024-11-20 08:58:18.359154] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.403 [2024-11-20 08:58:18.362173] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.403 passed 00:14:02.403 Test: admin_get_features_optional_features ...[2024-11-20 08:58:18.440685] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.660 [2024-11-20 08:58:18.443708] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.660 passed 00:14:02.660 Test: admin_set_features_number_of_queues ...[2024-11-20 08:58:18.520424] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.660 [2024-11-20 08:58:18.633063] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.660 passed 00:14:02.917 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 08:58:18.713980] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.917 [2024-11-20 08:58:18.717020] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.917 passed 00:14:02.917 Test: admin_get_log_page_with_lpo ...[2024-11-20 08:58:18.792420] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.917 [2024-11-20 08:58:18.854960] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:02.917 [2024-11-20 08:58:18.868033] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:02.917 passed 00:14:02.917 Test: fabric_property_get ...[2024-11-20 08:58:18.944098] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:02.917 [2024-11-20 08:58:18.945333] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:02.917 [2024-11-20 08:58:18.947120] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.175 passed 00:14:03.175 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 08:58:19.027632] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.175 [2024-11-20 08:58:19.028877] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:03.175 [2024-11-20 08:58:19.030658] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.175 passed 00:14:03.175 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 08:58:19.106626] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.175 [2024-11-20 08:58:19.188956] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:03.175 [2024-11-20 08:58:19.204957] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:03.175 [2024-11-20 08:58:19.210038] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.432 passed 00:14:03.432 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 08:58:19.291008] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.432 [2024-11-20 
08:58:19.292245] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:14:03.432 [2024-11-20 08:58:19.294031] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.432 passed 00:14:03.432 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 08:58:19.369405] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.432 [2024-11-20 08:58:19.448960] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:03.690 [2024-11-20 08:58:19.472960] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:03.690 [2024-11-20 08:58:19.478040] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.690 passed 00:14:03.690 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 08:58:19.551138] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.690 [2024-11-20 08:58:19.552381] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:14:03.690 [2024-11-20 08:58:19.552405] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:14:03.690 [2024-11-20 08:58:19.556169] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.690 passed 00:14:03.690 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 08:58:19.631054] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.690 [2024-11-20 08:58:19.723953] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:14:03.947 [2024-11-20 08:58:19.731959] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:14:03.947 [2024-11-20 08:58:19.739956] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:14:03.947 [2024-11-20 
08:58:19.747957] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:14:03.947 [2024-11-20 08:58:19.777032] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.947 passed 00:14:03.947 Test: admin_create_io_sq_verify_pc ...[2024-11-20 08:58:19.853123] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:03.947 [2024-11-20 08:58:19.869963] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:14:03.947 [2024-11-20 08:58:19.887302] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:03.947 passed 00:14:03.947 Test: admin_create_io_qp_max_qps ...[2024-11-20 08:58:19.966824] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.367 [2024-11-20 08:58:21.055958] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:14:05.624 [2024-11-20 08:58:21.440312] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.624 passed 00:14:05.624 Test: admin_create_io_sq_shared_cq ...[2024-11-20 08:58:21.517329] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:05.624 [2024-11-20 08:58:21.649959] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:14:05.882 [2024-11-20 08:58:21.687013] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:05.882 passed 00:14:05.882 00:14:05.882 Run Summary: Type Total Ran Passed Failed Inactive 00:14:05.882 suites 1 1 n/a 0 0 00:14:05.882 tests 18 18 18 0 0 00:14:05.882 asserts 360 360 360 0 n/a 00:14:05.882 00:14:05.882 Elapsed time = 1.504 seconds 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2305576 00:14:05.882 08:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 2305576 ']' 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 2305576 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2305576 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2305576' 00:14:05.882 killing process with pid 2305576 00:14:05.882 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 2305576 00:14:05.883 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 2305576 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:14:06.141 00:14:06.141 real 0m5.655s 00:14:06.141 user 0m15.758s 00:14:06.141 sys 0m0.532s 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.141 08:58:21 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:06.141 ************************************ 00:14:06.141 END TEST nvmf_vfio_user_nvme_compliance 00:14:06.141 ************************************ 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.141 08:58:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.141 ************************************ 00:14:06.141 START TEST nvmf_vfio_user_fuzz 00:14:06.141 ************************************ 00:14:06.141 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:14:06.141 * Looking for test storage... 
00:14:06.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.141 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.141 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.141 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:14:06.400 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.400 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.400 --rc genhtml_branch_coverage=1 00:14:06.400 --rc genhtml_function_coverage=1 00:14:06.400 --rc genhtml_legend=1 00:14:06.400 --rc geninfo_all_blocks=1 00:14:06.400 --rc geninfo_unexecuted_blocks=1 00:14:06.400 00:14:06.400 ' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.400 --rc genhtml_branch_coverage=1 00:14:06.400 --rc genhtml_function_coverage=1 00:14:06.400 --rc genhtml_legend=1 00:14:06.400 --rc geninfo_all_blocks=1 00:14:06.400 --rc geninfo_unexecuted_blocks=1 00:14:06.400 00:14:06.400 ' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.400 --rc genhtml_branch_coverage=1 00:14:06.400 --rc genhtml_function_coverage=1 00:14:06.400 --rc genhtml_legend=1 00:14:06.400 --rc geninfo_all_blocks=1 00:14:06.400 --rc geninfo_unexecuted_blocks=1 00:14:06.400 00:14:06.400 ' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.400 --rc genhtml_branch_coverage=1 00:14:06.400 --rc genhtml_function_coverage=1 00:14:06.400 --rc genhtml_legend=1 00:14:06.400 --rc geninfo_all_blocks=1 00:14:06.400 --rc geninfo_unexecuted_blocks=1 00:14:06.400 00:14:06.400 ' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.400 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.400 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:06.401 08:58:22 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:06.401 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2306640 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2306640' 00:14:06.401 Process pid: 2306640 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2306640 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 2306640 ']' 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.401 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.401 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:06.659 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.659 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:14:06.659 08:58:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.594 malloc0 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:14:07.594 08:58:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:14:39.660 Fuzzing completed. 
Shutting down the fuzz application 00:14:39.660 00:14:39.660 Dumping successful admin opcodes: 00:14:39.660 8, 9, 10, 24, 00:14:39.660 Dumping successful io opcodes: 00:14:39.660 0, 00:14:39.660 NS: 0x20000081ef00 I/O qp, Total commands completed: 1128264, total successful commands: 4442, random_seed: 3974652992 00:14:39.660 NS: 0x20000081ef00 admin qp, Total commands completed: 280703, total successful commands: 2261, random_seed: 764174464 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2306640 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 2306640 ']' 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 2306640 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2306640 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:39.660 
08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2306640' 00:14:39.660 killing process with pid 2306640 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 2306640 00:14:39.660 08:58:53 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 2306640 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:14:39.660 00:14:39.660 real 0m32.220s 00:14:39.660 user 0m34.139s 00:14:39.660 sys 0m27.420s 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:14:39.660 ************************************ 00:14:39.660 END TEST nvmf_vfio_user_fuzz 00:14:39.660 ************************************ 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:39.660 ************************************ 00:14:39.660 START TEST nvmf_auth_target 00:14:39.660 ************************************ 00:14:39.660 08:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:14:39.660 * Looking for test storage... 00:14:39.660 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:39.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.660 --rc genhtml_branch_coverage=1 00:14:39.660 --rc genhtml_function_coverage=1 00:14:39.660 --rc genhtml_legend=1 00:14:39.660 --rc geninfo_all_blocks=1 00:14:39.660 --rc geninfo_unexecuted_blocks=1 00:14:39.660 00:14:39.660 ' 00:14:39.660 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:39.660 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.660 --rc genhtml_branch_coverage=1 00:14:39.660 --rc genhtml_function_coverage=1 00:14:39.660 --rc genhtml_legend=1 00:14:39.660 --rc geninfo_all_blocks=1 00:14:39.661 --rc geninfo_unexecuted_blocks=1 00:14:39.661 00:14:39.661 ' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.661 --rc genhtml_branch_coverage=1 00:14:39.661 --rc genhtml_function_coverage=1 00:14:39.661 --rc genhtml_legend=1 00:14:39.661 --rc geninfo_all_blocks=1 00:14:39.661 --rc geninfo_unexecuted_blocks=1 00:14:39.661 00:14:39.661 ' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:39.661 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:39.661 --rc genhtml_branch_coverage=1 00:14:39.661 --rc genhtml_function_coverage=1 00:14:39.661 --rc genhtml_legend=1 00:14:39.661 --rc geninfo_all_blocks=1 00:14:39.661 --rc geninfo_unexecuted_blocks=1 00:14:39.661 00:14:39.661 ' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:39.661 08:58:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:39.661 
08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:39.661 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
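The `[: : integer expression expected` error logged above comes from `'[' '' -eq 1 ']'`: the `-eq` operator requires both operands to be integers, and here the variable expanded to an empty string. A small sketch of the usual guard (defaulting the value before the numeric test); the function name is illustrative, not from the script:

```shell
# Reproduce-and-fix sketch for the "[: : integer expression expected"
# failure: '[' '' -eq 1 ']' errors because '' is not an integer.
# Defaulting the expansion ("${var:-0}") keeps the test well-formed.
check_flag() {
  [ "${1:-0}" -eq 1 ]   # empty or unset argument is treated as 0
}

if check_flag ""; then echo "flag set"; else echo "flag unset"; fi
```

Note that in the log the script continues anyway: `[` returns a non-zero status on the malformed test, which the `if`/`'['` chain simply treats as false.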
00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:14:39.661 08:58:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:14:44.937 
08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:44.937 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:44.937 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:44.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:44.937 Found net devices under 0000:86:00.0: cvl_0_0 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:44.937 Found net devices under 0000:86:00.1: cvl_0_1 00:14:44.937 08:59:00 
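The device-discovery loop above globs `/sys/bus/pci/devices/$pci/net/*` and then reduces the matches to bare interface names with `pci_net_devs=("${pci_net_devs[@]##*/}")`. A self-contained sketch of that expansion (using a literal path rather than a live sysfs glob):

```shell
# The "${arr[@]##*/}" expansion applies the longest-prefix strip '##*/'
# to every array element, reducing full sysfs paths to interface names.
pci_net_devs=(/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0)
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[0]}"   # cvl_0_0
```

This is how the log arrives at `Found net devices under 0000:86:00.0: cvl_0_0` from the raw sysfs listing.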
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # create_target_ns 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:14:44.937 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == 
tcp ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:14:44.938 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:14:44.938 10.0.0.1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec 
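The `val_to_ip` calls traced above turn a 32-bit integer IP pool value into dotted-quad form via `printf '%u.%u.%u.%u\n'` (167772161 is 0x0A000001, i.e. 10.0.0.1). A hedged sketch of that conversion, with the octet extraction reconstructed from the printed arguments rather than copied from `nvmf/setup.sh`:

```shell
# Convert a 32-bit integer to dotted-quad notation by shifting out
# each octet, most significant first (167772161 == 0x0A000001).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
```

The setup script then increments the integer (`$((++ip))`) to derive the paired target address, which is why the namespace side gets 10.0.0.2.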
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:14:44.938 10.0.0.2 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:14:44.938 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:44.938 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:14:44.938 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:14:44.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:44.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:14:44.938 00:14:44.938 --- 10.0.0.1 ping statistics --- 00:14:44.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.938 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:44.938 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:14:44.939 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:14:44.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.146 ms 00:14:44.939 00:14:44.939 --- 10.0.0.2 ping statistics --- 00:14:44.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:44.939 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 
00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:14:44.939 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 
00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=2315021 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 2315021 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 
-e 0xFFFF -L nvmf_auth 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2315021 ']' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.939 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=2315046 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 
'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=ed229447071d859827c8a7f455f0d54e08a51c1ccf1fbc1d 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.oyo 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key ed229447071d859827c8a7f455f0d54e08a51c1ccf1fbc1d 0 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 ed229447071d859827c8a7f455f0d54e08a51c1ccf1fbc1d 0 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=ed229447071d859827c8a7f455f0d54e08a51c1ccf1fbc1d 00:14:44.940 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.oyo 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.oyo 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.oyo 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=2e662722223c1afcceab317675de014256c42d90cffc90dc84bc92ce01325853 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.OXZ 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 2e662722223c1afcceab317675de014256c42d90cffc90dc84bc92ce01325853 3 00:14:44.940 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 2e662722223c1afcceab317675de014256c42d90cffc90dc84bc92ce01325853 3 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=2e662722223c1afcceab317675de014256c42d90cffc90dc84bc92ce01325853 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:14:44.940 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.OXZ 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.OXZ 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.OXZ 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:45.199 08:59:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=503941bea27304a1a00ba1ffda3d05a6 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:14:45.199 08:59:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.kAh 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 503941bea27304a1a00ba1ffda3d05a6 1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 503941bea27304a1a00ba1ffda3d05a6 1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=503941bea27304a1a00ba1ffda3d05a6 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.kAh 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.kAh 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.kAh 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.199 08:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=afd483f2400ba24ecaa1d85f5d5302720e9714113a3612b5 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.z3e 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key afd483f2400ba24ecaa1d85f5d5302720e9714113a3612b5 2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 afd483f2400ba24ecaa1d85f5d5302720e9714113a3612b5 2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=afd483f2400ba24ecaa1d85f5d5302720e9714113a3612b5 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.z3e 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.z3e 00:14:45.199 08:59:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.z3e 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=28f666bbdf47254f9bc4990385fbe8e3610d3159296d36ed 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.ybR 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 28f666bbdf47254f9bc4990385fbe8e3610d3159296d36ed 2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 28f666bbdf47254f9bc4990385fbe8e3610d3159296d36ed 2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # 
key=28f666bbdf47254f9bc4990385fbe8e3610d3159296d36ed 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.ybR 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.ybR 00:14:45.199 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.ybR 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=6410b1b5dca3d78f4ccaa7043e932756 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.l9v 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 6410b1b5dca3d78f4ccaa7043e932756 1 
00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 6410b1b5dca3d78f4ccaa7043e932756 1 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=6410b1b5dca3d78f4ccaa7043e932756 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.l9v 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.l9v 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.l9v 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@529 -- # key=bedf65504bfda8ed1fd21ca69fad018148f7f3f51150e4ac75ee3c094e926d98 00:14:45.200 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.VWN 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key bedf65504bfda8ed1fd21ca69fad018148f7f3f51150e4ac75ee3c094e926d98 3 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 bedf65504bfda8ed1fd21ca69fad018148f7f3f51150e4ac75ee3c094e926d98 3 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=bedf65504bfda8ed1fd21ca69fad018148f7f3f51150e4ac75ee3c094e926d98 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.VWN 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.VWN 00:14:45.457 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.VWN 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 2315021 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2315021 
']' 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 2315046 /var/tmp/host.sock 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2315046 ']' 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:45.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:45.458 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oyo 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.715 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.716 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.oyo 00:14:45.716 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.oyo 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.OXZ ]] 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OXZ 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OXZ 00:14:45.973 08:59:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OXZ 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kAh 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.kAh 00:14:46.231 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.kAh 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.z3e ]] 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z3e 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z3e 00:14:46.488 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z3e 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ybR 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.ybR 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.ybR 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.l9v ]] 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l9v 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l9v 00:14:46.746 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l9v 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VWN 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.VWN 00:14:47.004 08:59:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.VWN 00:14:47.262 08:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:14:47.262 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:14:47.262 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:47.262 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:47.262 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:47.262 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.520 08:59:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.520 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:47.778 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:47.778 { 00:14:47.778 "cntlid": 1, 00:14:47.778 "qid": 0, 00:14:47.778 "state": "enabled", 00:14:47.778 "thread": "nvmf_tgt_poll_group_000", 00:14:47.778 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:47.778 "listen_address": { 00:14:47.778 "trtype": "TCP", 00:14:47.778 "adrfam": "IPv4", 00:14:47.778 "traddr": "10.0.0.2", 00:14:47.778 "trsvcid": "4420" 00:14:47.778 }, 00:14:47.778 "peer_address": { 00:14:47.778 "trtype": "TCP", 00:14:47.778 "adrfam": "IPv4", 00:14:47.778 "traddr": "10.0.0.1", 00:14:47.778 "trsvcid": "42396" 00:14:47.778 }, 00:14:47.778 "auth": { 00:14:47.778 "state": "completed", 00:14:47.778 "digest": "sha256", 00:14:47.778 "dhgroup": "null" 00:14:47.778 } 00:14:47.778 } 00:14:47.778 ]' 00:14:47.778 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.036 08:59:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.294 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:14:48.294 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:48.859 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:14:48.859 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.117 08:59:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:49.376 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:49.376 { 00:14:49.376 "cntlid": 3, 00:14:49.376 "qid": 0, 00:14:49.376 "state": "enabled", 00:14:49.376 "thread": "nvmf_tgt_poll_group_000", 00:14:49.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:49.376 "listen_address": { 00:14:49.376 "trtype": "TCP", 00:14:49.376 "adrfam": "IPv4", 00:14:49.376 
"traddr": "10.0.0.2", 00:14:49.376 "trsvcid": "4420" 00:14:49.376 }, 00:14:49.376 "peer_address": { 00:14:49.376 "trtype": "TCP", 00:14:49.376 "adrfam": "IPv4", 00:14:49.376 "traddr": "10.0.0.1", 00:14:49.376 "trsvcid": "57692" 00:14:49.376 }, 00:14:49.376 "auth": { 00:14:49.376 "state": "completed", 00:14:49.376 "digest": "sha256", 00:14:49.376 "dhgroup": "null" 00:14:49.376 } 00:14:49.376 } 00:14:49.376 ]' 00:14:49.376 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:49.634 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:49.892 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:14:49.892 08:59:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:14:50.458 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:50.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:50.458 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:50.458 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.458 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.459 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.716 00:14:50.716 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:50.716 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:50.716 
08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.974 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:50.975 { 00:14:50.975 "cntlid": 5, 00:14:50.975 "qid": 0, 00:14:50.975 "state": "enabled", 00:14:50.975 "thread": "nvmf_tgt_poll_group_000", 00:14:50.975 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:50.975 "listen_address": { 00:14:50.975 "trtype": "TCP", 00:14:50.975 "adrfam": "IPv4", 00:14:50.975 "traddr": "10.0.0.2", 00:14:50.975 "trsvcid": "4420" 00:14:50.975 }, 00:14:50.975 "peer_address": { 00:14:50.975 "trtype": "TCP", 00:14:50.975 "adrfam": "IPv4", 00:14:50.975 "traddr": "10.0.0.1", 00:14:50.975 "trsvcid": "57708" 00:14:50.975 }, 00:14:50.975 "auth": { 00:14:50.975 "state": "completed", 00:14:50.975 "digest": "sha256", 00:14:50.975 "dhgroup": "null" 00:14:50.975 } 00:14:50.975 } 00:14:50.975 ]' 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.975 08:59:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:14:51.233 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:14:51.798 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:51.798 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:51.798 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:51.798 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:51.798 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.056 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.056 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:52.056 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.056 08:59:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
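The `format_key`/`format_dhchap_key` steps repeated throughout this trace (e.g. `format_key DHHC-1 6410b1b5dca3d78f4ccaa7043e932756 1` followed by `python -`) run an inline Python heredoc whose body is not captured in the log. A minimal reconstruction, under the assumption that the secret is the prefix, a two-digit hash id, and the base64 of the ASCII key bytes followed by their little-endian CRC-32:

```python
# Sketch of the format_key step seen in this trace. Assumption: the inline
# "python -" heredoc base64-encodes the ASCII key plus a little-endian CRC-32,
# producing secrets of the shape DHHC-1:<hash>:<base64>:.
import base64
import zlib


def format_dhchap_secret(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Encode a DH-HMAC-CHAP secret: prefix, hash id, base64(key || crc32(key))."""
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, byteorder="little")
    return f"{prefix}:{digest:02x}:{base64.b64encode(raw + crc).decode()}:"
```

Under this assumption, the trailing CRC lets the receiving side detect a corrupted or truncated secret before attempting authentication.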
00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.056 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:52.314 00:14:52.314 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:52.314 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:52.314 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:52.572 
08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:52.572 { 00:14:52.572 "cntlid": 7, 00:14:52.572 "qid": 0, 00:14:52.572 "state": "enabled", 00:14:52.572 "thread": "nvmf_tgt_poll_group_000", 00:14:52.572 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:52.572 "listen_address": { 00:14:52.572 "trtype": "TCP", 00:14:52.572 "adrfam": "IPv4", 00:14:52.572 "traddr": "10.0.0.2", 00:14:52.572 "trsvcid": "4420" 00:14:52.572 }, 00:14:52.572 "peer_address": { 00:14:52.572 "trtype": "TCP", 00:14:52.572 "adrfam": "IPv4", 00:14:52.572 "traddr": "10.0.0.1", 00:14:52.572 "trsvcid": "57722" 00:14:52.572 }, 00:14:52.572 "auth": { 00:14:52.572 "state": "completed", 00:14:52.572 "digest": "sha256", 00:14:52.572 "dhgroup": "null" 00:14:52.572 } 00:14:52.572 } 00:14:52.572 ]' 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:14:52.572 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:52.830 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:52.830 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:52.830 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:52.830 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:14:52.830 08:59:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:53.396 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:53.396 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.655 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:53.914 00:14:53.914 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:53.914 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:53.914 08:59:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:54.172 { 00:14:54.172 "cntlid": 9, 00:14:54.172 "qid": 0, 00:14:54.172 "state": "enabled", 00:14:54.172 "thread": "nvmf_tgt_poll_group_000", 00:14:54.172 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:54.172 "listen_address": { 00:14:54.172 "trtype": "TCP", 00:14:54.172 "adrfam": "IPv4", 00:14:54.172 "traddr": "10.0.0.2", 00:14:54.172 "trsvcid": "4420" 00:14:54.172 }, 00:14:54.172 "peer_address": { 00:14:54.172 "trtype": "TCP", 00:14:54.172 "adrfam": "IPv4", 00:14:54.172 "traddr": "10.0.0.1", 00:14:54.172 "trsvcid": "57748" 00:14:54.172 
}, 00:14:54.172 "auth": { 00:14:54.172 "state": "completed", 00:14:54.172 "digest": "sha256", 00:14:54.172 "dhgroup": "ffdhe2048" 00:14:54.172 } 00:14:54.172 } 00:14:54.172 ]' 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:54.172 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:54.430 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:54.430 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:54.430 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:54.430 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:14:54.430 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret 
DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:14:54.996 08:59:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:54.996 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.255 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:55.513 00:14:55.513 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:55.513 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:55.513 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:55.769 { 00:14:55.769 "cntlid": 11, 00:14:55.769 "qid": 0, 00:14:55.769 "state": "enabled", 00:14:55.769 "thread": "nvmf_tgt_poll_group_000", 00:14:55.769 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:55.769 "listen_address": { 00:14:55.769 "trtype": "TCP", 00:14:55.769 "adrfam": "IPv4", 00:14:55.769 "traddr": "10.0.0.2", 00:14:55.769 "trsvcid": "4420" 00:14:55.769 }, 00:14:55.769 "peer_address": { 00:14:55.769 "trtype": "TCP", 00:14:55.769 "adrfam": "IPv4", 00:14:55.769 "traddr": "10.0.0.1", 00:14:55.769 "trsvcid": "57756" 00:14:55.769 }, 00:14:55.769 "auth": { 00:14:55.769 "state": "completed", 00:14:55.769 "digest": "sha256", 00:14:55.769 "dhgroup": "ffdhe2048" 00:14:55.769 } 00:14:55.769 } 00:14:55.769 ]' 00:14:55.769 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:55.770 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:55.770 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:55.770 08:59:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:55.770 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:56.027 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:56.027 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:56.027 08:59:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.027 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:14:56.027 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:56.592 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.592 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:56.849 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:14:56.849 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:56.850 08:59:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:57.107 00:14:57.107 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:57.107 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.107 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:57.364 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.364 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.365 08:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:57.365 { 00:14:57.365 "cntlid": 13, 00:14:57.365 "qid": 0, 00:14:57.365 "state": "enabled", 00:14:57.365 "thread": "nvmf_tgt_poll_group_000", 00:14:57.365 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:57.365 "listen_address": { 00:14:57.365 "trtype": "TCP", 00:14:57.365 "adrfam": "IPv4", 00:14:57.365 "traddr": "10.0.0.2", 00:14:57.365 "trsvcid": "4420" 00:14:57.365 }, 00:14:57.365 "peer_address": { 00:14:57.365 "trtype": "TCP", 00:14:57.365 "adrfam": "IPv4", 00:14:57.365 "traddr": "10.0.0.1", 00:14:57.365 "trsvcid": "57786" 00:14:57.365 }, 00:14:57.365 "auth": { 00:14:57.365 "state": "completed", 00:14:57.365 "digest": "sha256", 00:14:57.365 "dhgroup": "ffdhe2048" 00:14:57.365 } 00:14:57.365 } 00:14:57.365 ]' 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:57.365 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:57.622 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:14:57.622 08:59:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:58.186 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.186 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.444 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:14:58.702 00:14:58.702 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:14:58.702 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:14:58.702 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:14:58.960 { 00:14:58.960 "cntlid": 15, 00:14:58.960 "qid": 0, 00:14:58.960 "state": "enabled", 00:14:58.960 "thread": "nvmf_tgt_poll_group_000", 00:14:58.960 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:14:58.960 "listen_address": { 00:14:58.960 "trtype": "TCP", 00:14:58.960 "adrfam": "IPv4", 00:14:58.960 "traddr": "10.0.0.2", 00:14:58.960 "trsvcid": "4420" 00:14:58.960 }, 00:14:58.960 "peer_address": { 00:14:58.960 "trtype": "TCP", 00:14:58.960 "adrfam": "IPv4", 00:14:58.960 "traddr": "10.0.0.1", 
00:14:58.960 "trsvcid": "43320" 00:14:58.960 }, 00:14:58.960 "auth": { 00:14:58.960 "state": "completed", 00:14:58.960 "digest": "sha256", 00:14:58.960 "dhgroup": "ffdhe2048" 00:14:58.960 } 00:14:58.960 } 00:14:58.960 ]' 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.960 08:59:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:59.218 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:14:59.218 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.784 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:14:59.784 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:00.042 08:59:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.042 08:59:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.042 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.042 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.042 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.042 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:00.300 00:15:00.300 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:00.300 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.300 08:59:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:00.557 { 00:15:00.557 "cntlid": 17, 00:15:00.557 "qid": 0, 00:15:00.557 "state": "enabled", 00:15:00.557 "thread": "nvmf_tgt_poll_group_000", 00:15:00.557 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:00.557 "listen_address": { 00:15:00.557 "trtype": "TCP", 00:15:00.557 "adrfam": "IPv4", 00:15:00.557 "traddr": "10.0.0.2", 00:15:00.557 "trsvcid": "4420" 00:15:00.557 }, 00:15:00.557 "peer_address": { 00:15:00.557 "trtype": "TCP", 00:15:00.557 "adrfam": "IPv4", 00:15:00.557 "traddr": "10.0.0.1", 00:15:00.557 "trsvcid": "43328" 00:15:00.557 }, 00:15:00.557 "auth": { 00:15:00.557 "state": "completed", 00:15:00.557 "digest": "sha256", 00:15:00.557 "dhgroup": "ffdhe3072" 00:15:00.557 } 00:15:00.557 } 00:15:00.557 ]' 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:00.557 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:00.814 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.814 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.814 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.814 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:00.814 08:59:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.380 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.380 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.637 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:01.894 00:15:01.894 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:01.894 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:01.894 08:59:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.152 
08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:02.152 { 00:15:02.152 "cntlid": 19, 00:15:02.152 "qid": 0, 00:15:02.152 "state": "enabled", 00:15:02.152 "thread": "nvmf_tgt_poll_group_000", 00:15:02.152 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:02.152 "listen_address": { 00:15:02.152 "trtype": "TCP", 00:15:02.152 "adrfam": "IPv4", 00:15:02.152 "traddr": "10.0.0.2", 00:15:02.152 "trsvcid": "4420" 00:15:02.152 }, 00:15:02.152 "peer_address": { 00:15:02.152 "trtype": "TCP", 00:15:02.152 "adrfam": "IPv4", 00:15:02.152 "traddr": "10.0.0.1", 00:15:02.152 "trsvcid": "43362" 00:15:02.152 }, 00:15:02.152 "auth": { 00:15:02.152 "state": "completed", 00:15:02.152 "digest": "sha256", 00:15:02.152 "dhgroup": "ffdhe3072" 00:15:02.152 } 00:15:02.152 } 00:15:02.152 ]' 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:02.152 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:02.410 08:59:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:02.976 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.233 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.234 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.234 08:59:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:03.491 00:15:03.491 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:03.748 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:03.748 { 00:15:03.748 "cntlid": 21, 00:15:03.748 "qid": 0, 00:15:03.748 "state": "enabled", 00:15:03.748 "thread": "nvmf_tgt_poll_group_000", 00:15:03.748 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:03.748 "listen_address": { 00:15:03.748 "trtype": "TCP", 00:15:03.748 "adrfam": "IPv4", 00:15:03.748 "traddr": "10.0.0.2", 00:15:03.748 "trsvcid": "4420" 00:15:03.748 }, 00:15:03.748 "peer_address": { 
00:15:03.748 "trtype": "TCP", 00:15:03.748 "adrfam": "IPv4", 00:15:03.748 "traddr": "10.0.0.1", 00:15:03.748 "trsvcid": "43384" 00:15:03.748 }, 00:15:03.748 "auth": { 00:15:03.749 "state": "completed", 00:15:03.749 "digest": "sha256", 00:15:03.749 "dhgroup": "ffdhe3072" 00:15:03.749 } 00:15:03.749 } 00:15:03.749 ]' 00:15:03.749 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:03.749 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:03.749 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:04.007 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:04.007 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:04.007 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.007 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.007 08:59:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:04.266 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:04.266 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret 
DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:04.834 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:04.835 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:04.835 08:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:04.835 08:59:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:05.094 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:05.353 { 00:15:05.353 "cntlid": 23, 00:15:05.353 "qid": 0, 00:15:05.353 "state": "enabled", 00:15:05.353 "thread": "nvmf_tgt_poll_group_000", 00:15:05.353 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:05.353 "listen_address": { 00:15:05.353 "trtype": "TCP", 00:15:05.353 "adrfam": "IPv4", 00:15:05.353 "traddr": "10.0.0.2", 00:15:05.353 "trsvcid": "4420" 00:15:05.353 }, 00:15:05.353 "peer_address": { 00:15:05.353 "trtype": "TCP", 00:15:05.353 "adrfam": "IPv4", 00:15:05.353 "traddr": "10.0.0.1", 00:15:05.353 "trsvcid": "43420" 00:15:05.353 }, 00:15:05.353 "auth": { 00:15:05.353 "state": "completed", 00:15:05.353 "digest": "sha256", 00:15:05.353 "dhgroup": "ffdhe3072" 00:15:05.353 } 00:15:05.353 } 00:15:05.353 ]' 00:15:05.353 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:05.612 08:59:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:05.612 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.870 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:05.871 08:59:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.438 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.438 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:06.735 00:15:06.735 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:06.735 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:06.735 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:07.028 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:07.028 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:07.028 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.028 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.028 08:59:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.028 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:07.028 { 00:15:07.028 "cntlid": 25, 00:15:07.028 "qid": 0, 00:15:07.028 "state": "enabled", 00:15:07.028 "thread": "nvmf_tgt_poll_group_000", 00:15:07.028 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:07.028 "listen_address": { 00:15:07.028 "trtype": "TCP", 00:15:07.028 "adrfam": "IPv4", 00:15:07.028 "traddr": "10.0.0.2", 00:15:07.028 "trsvcid": "4420" 00:15:07.028 }, 00:15:07.028 "peer_address": { 00:15:07.028 "trtype": "TCP", 00:15:07.028 "adrfam": "IPv4", 00:15:07.028 "traddr": "10.0.0.1", 00:15:07.028 "trsvcid": "43442" 00:15:07.028 }, 00:15:07.028 "auth": { 00:15:07.028 "state": "completed", 00:15:07.028 "digest": "sha256", 00:15:07.028 "dhgroup": "ffdhe4096" 00:15:07.029 } 00:15:07.029 } 00:15:07.029 ]' 00:15:07.029 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:07.029 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:07.029 08:59:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:07.029 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:07.029 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:07.318 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.318 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.318 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.318 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:07.318 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:07.906 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:07.906 08:59:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:07.906 08:59:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.166 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:08.425 00:15:08.425 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:08.425 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:08.425 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:08.684 { 00:15:08.684 "cntlid": 27, 00:15:08.684 "qid": 0, 00:15:08.684 "state": "enabled", 00:15:08.684 "thread": "nvmf_tgt_poll_group_000", 00:15:08.684 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:08.684 "listen_address": { 00:15:08.684 "trtype": "TCP", 00:15:08.684 "adrfam": "IPv4", 00:15:08.684 "traddr": "10.0.0.2", 00:15:08.684 
"trsvcid": "4420" 00:15:08.684 }, 00:15:08.684 "peer_address": { 00:15:08.684 "trtype": "TCP", 00:15:08.684 "adrfam": "IPv4", 00:15:08.684 "traddr": "10.0.0.1", 00:15:08.684 "trsvcid": "48614" 00:15:08.684 }, 00:15:08.684 "auth": { 00:15:08.684 "state": "completed", 00:15:08.684 "digest": "sha256", 00:15:08.684 "dhgroup": "ffdhe4096" 00:15:08.684 } 00:15:08.684 } 00:15:08.684 ]' 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:08.684 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:08.943 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:08.943 08:59:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:09.511 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:09.511 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:09.770 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:10.028 00:15:10.028 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:10.028 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:15:10.028 08:59:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:10.286 { 00:15:10.286 "cntlid": 29, 00:15:10.286 "qid": 0, 00:15:10.286 "state": "enabled", 00:15:10.286 "thread": "nvmf_tgt_poll_group_000", 00:15:10.286 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:10.286 "listen_address": { 00:15:10.286 "trtype": "TCP", 00:15:10.286 "adrfam": "IPv4", 00:15:10.286 "traddr": "10.0.0.2", 00:15:10.286 "trsvcid": "4420" 00:15:10.286 }, 00:15:10.286 "peer_address": { 00:15:10.286 "trtype": "TCP", 00:15:10.286 "adrfam": "IPv4", 00:15:10.286 "traddr": "10.0.0.1", 00:15:10.286 "trsvcid": "48640" 00:15:10.286 }, 00:15:10.286 "auth": { 00:15:10.286 "state": "completed", 00:15:10.286 "digest": "sha256", 00:15:10.286 "dhgroup": "ffdhe4096" 00:15:10.286 } 00:15:10.286 } 00:15:10.286 ]' 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:10.286 08:59:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:10.286 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:10.545 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:10.545 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:10.545 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:10.545 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:10.545 08:59:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:11.112 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.112 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.370 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:11.371 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.371 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:11.629 00:15:11.629 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:11.629 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:11.629 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:11.888 { 00:15:11.888 "cntlid": 31, 00:15:11.888 "qid": 0, 00:15:11.888 "state": "enabled", 00:15:11.888 "thread": "nvmf_tgt_poll_group_000", 00:15:11.888 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:11.888 "listen_address": { 00:15:11.888 "trtype": "TCP", 00:15:11.888 "adrfam": "IPv4", 00:15:11.888 "traddr": "10.0.0.2", 00:15:11.888 "trsvcid": "4420" 00:15:11.888 }, 00:15:11.888 "peer_address": { 00:15:11.888 "trtype": "TCP", 00:15:11.888 "adrfam": "IPv4", 00:15:11.888 "traddr": "10.0.0.1", 00:15:11.888 "trsvcid": "48670" 00:15:11.888 }, 00:15:11.888 "auth": { 00:15:11.888 "state": "completed", 00:15:11.888 "digest": "sha256", 00:15:11.888 "dhgroup": "ffdhe4096" 00:15:11.888 } 00:15:11.888 } 00:15:11.888 ]' 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:11.888 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:12.147 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:12.147 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:12.147 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:12.147 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:12.147 08:59:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:12.406 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:12.406 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.981 08:59:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:15:12.981 08:59:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:12.981 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.552 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:13.552 { 00:15:13.552 "cntlid": 33, 00:15:13.552 "qid": 0, 00:15:13.552 "state": "enabled", 00:15:13.552 "thread": "nvmf_tgt_poll_group_000", 00:15:13.552 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:13.552 "listen_address": { 00:15:13.552 "trtype": "TCP", 00:15:13.552 "adrfam": "IPv4", 00:15:13.552 "traddr": "10.0.0.2", 00:15:13.552 
"trsvcid": "4420" 00:15:13.552 }, 00:15:13.552 "peer_address": { 00:15:13.552 "trtype": "TCP", 00:15:13.552 "adrfam": "IPv4", 00:15:13.552 "traddr": "10.0.0.1", 00:15:13.552 "trsvcid": "48700" 00:15:13.552 }, 00:15:13.552 "auth": { 00:15:13.552 "state": "completed", 00:15:13.552 "digest": "sha256", 00:15:13.552 "dhgroup": "ffdhe6144" 00:15:13.552 } 00:15:13.552 } 00:15:13.552 ]' 00:15:13.552 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:13.809 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.810 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.068 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:14.068 08:59:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.634 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:14.634 08:59:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:14.634 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:14.635 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.635 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:14.635 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:14.893 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:14.893 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.893 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:14.893 08:59:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:15.151 00:15:15.151 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:15.151 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:15.151 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:15.409 { 00:15:15.409 "cntlid": 35, 00:15:15.409 "qid": 0, 00:15:15.409 "state": "enabled", 00:15:15.409 "thread": "nvmf_tgt_poll_group_000", 00:15:15.409 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:15.409 "listen_address": { 00:15:15.409 "trtype": "TCP", 00:15:15.409 "adrfam": "IPv4", 00:15:15.409 "traddr": "10.0.0.2", 00:15:15.409 "trsvcid": "4420" 00:15:15.409 }, 00:15:15.409 "peer_address": { 00:15:15.409 "trtype": "TCP", 00:15:15.409 "adrfam": "IPv4", 00:15:15.409 "traddr": "10.0.0.1", 00:15:15.409 "trsvcid": "48734" 00:15:15.409 }, 00:15:15.409 "auth": { 00:15:15.409 "state": "completed", 00:15:15.409 "digest": "sha256", 00:15:15.409 "dhgroup": "ffdhe6144" 00:15:15.409 } 00:15:15.409 } 00:15:15.409 ]' 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:15.409 08:59:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:15.409 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:15.410 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:15.410 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:15.410 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:15.410 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:15.668 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:15.668 08:59:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:16.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:16.235 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.494 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:16.752 00:15:16.752 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:16.752 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:16.752 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:17.011 08:59:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:17.011 { 00:15:17.011 "cntlid": 37, 00:15:17.011 "qid": 0, 00:15:17.011 "state": "enabled", 00:15:17.011 "thread": "nvmf_tgt_poll_group_000", 00:15:17.011 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:17.011 "listen_address": { 00:15:17.011 "trtype": "TCP", 00:15:17.011 "adrfam": "IPv4", 00:15:17.011 "traddr": "10.0.0.2", 00:15:17.011 "trsvcid": "4420" 00:15:17.011 }, 00:15:17.011 "peer_address": { 00:15:17.011 "trtype": "TCP", 00:15:17.011 "adrfam": "IPv4", 00:15:17.011 "traddr": "10.0.0.1", 00:15:17.011 "trsvcid": "48762" 00:15:17.011 }, 00:15:17.011 "auth": { 00:15:17.011 "state": "completed", 00:15:17.011 "digest": "sha256", 00:15:17.011 "dhgroup": "ffdhe6144" 00:15:17.011 } 00:15:17.011 } 00:15:17.011 ]' 00:15:17.011 08:59:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:17.011 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:17.011 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:17.270 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:17.270 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:17.271 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:17.271 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:17.271 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:17.271 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:17.271 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:17.837 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:17.837 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:17.837 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:17.837 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:17.837 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.096 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.096 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:18.096 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.096 08:59:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.096 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:18.663 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:18.663 { 00:15:18.663 "cntlid": 39, 00:15:18.663 "qid": 0, 00:15:18.663 "state": "enabled", 00:15:18.663 "thread": "nvmf_tgt_poll_group_000", 00:15:18.663 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:18.663 "listen_address": { 00:15:18.663 "trtype": "TCP", 00:15:18.663 "adrfam": 
"IPv4", 00:15:18.663 "traddr": "10.0.0.2", 00:15:18.663 "trsvcid": "4420" 00:15:18.663 }, 00:15:18.663 "peer_address": { 00:15:18.663 "trtype": "TCP", 00:15:18.663 "adrfam": "IPv4", 00:15:18.663 "traddr": "10.0.0.1", 00:15:18.663 "trsvcid": "43962" 00:15:18.663 }, 00:15:18.663 "auth": { 00:15:18.663 "state": "completed", 00:15:18.663 "digest": "sha256", 00:15:18.663 "dhgroup": "ffdhe6144" 00:15:18.663 } 00:15:18.663 } 00:15:18.663 ]' 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:18.663 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:18.921 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:18.922 08:59:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:19.488 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:19.488 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:19.488 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:19.488 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.488 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:19.747 
08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:19.747 08:59:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:20.314 00:15:20.314 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:20.314 08:59:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:20.314 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:20.573 { 00:15:20.573 "cntlid": 41, 00:15:20.573 "qid": 0, 00:15:20.573 "state": "enabled", 00:15:20.573 "thread": "nvmf_tgt_poll_group_000", 00:15:20.573 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:20.573 "listen_address": { 00:15:20.573 "trtype": "TCP", 00:15:20.573 "adrfam": "IPv4", 00:15:20.573 "traddr": "10.0.0.2", 00:15:20.573 "trsvcid": "4420" 00:15:20.573 }, 00:15:20.573 "peer_address": { 00:15:20.573 "trtype": "TCP", 00:15:20.573 "adrfam": "IPv4", 00:15:20.573 "traddr": "10.0.0.1", 00:15:20.573 "trsvcid": "43990" 00:15:20.573 }, 00:15:20.573 "auth": { 00:15:20.573 "state": "completed", 00:15:20.573 "digest": "sha256", 00:15:20.573 "dhgroup": "ffdhe8192" 00:15:20.573 } 00:15:20.573 } 00:15:20.573 ]' 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:20.573 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:20.832 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:20.832 08:59:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:21.398 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.398 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:21.657 08:59:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:22.225 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:22.225 08:59:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:22.225 { 00:15:22.225 "cntlid": 43, 00:15:22.225 "qid": 0, 00:15:22.225 "state": "enabled", 00:15:22.225 "thread": "nvmf_tgt_poll_group_000", 00:15:22.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:22.225 "listen_address": { 00:15:22.225 "trtype": "TCP", 00:15:22.225 "adrfam": "IPv4", 00:15:22.225 "traddr": "10.0.0.2", 00:15:22.225 "trsvcid": "4420" 00:15:22.225 }, 00:15:22.225 "peer_address": { 00:15:22.225 "trtype": "TCP", 00:15:22.225 "adrfam": "IPv4", 00:15:22.225 "traddr": "10.0.0.1", 00:15:22.225 "trsvcid": "44022" 00:15:22.225 }, 00:15:22.225 "auth": { 00:15:22.225 "state": "completed", 00:15:22.225 "digest": "sha256", 00:15:22.225 "dhgroup": "ffdhe8192" 00:15:22.225 } 00:15:22.225 } 00:15:22.225 ]' 00:15:22.225 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:22.483 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:22.484 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:22.742 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:22.742 08:59:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:23.307 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.307 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:23.566 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:24.134 00:15:24.134 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:24.134 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:24.134 08:59:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:24.134 { 00:15:24.134 "cntlid": 45, 00:15:24.134 "qid": 0, 00:15:24.134 "state": "enabled", 00:15:24.134 "thread": "nvmf_tgt_poll_group_000", 00:15:24.134 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:24.134 
"listen_address": { 00:15:24.134 "trtype": "TCP", 00:15:24.134 "adrfam": "IPv4", 00:15:24.134 "traddr": "10.0.0.2", 00:15:24.134 "trsvcid": "4420" 00:15:24.134 }, 00:15:24.134 "peer_address": { 00:15:24.134 "trtype": "TCP", 00:15:24.134 "adrfam": "IPv4", 00:15:24.134 "traddr": "10.0.0.1", 00:15:24.134 "trsvcid": "44042" 00:15:24.134 }, 00:15:24.134 "auth": { 00:15:24.134 "state": "completed", 00:15:24.134 "digest": "sha256", 00:15:24.134 "dhgroup": "ffdhe8192" 00:15:24.134 } 00:15:24.134 } 00:15:24.134 ]' 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:24.134 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:24.393 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:24.393 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:24.393 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:24.393 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:24.393 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:24.652 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:24.652 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:25.219 08:59:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:25.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:15:25.219 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.220 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:25.785 00:15:25.785 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:25.785 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:25.785 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:26.043 { 00:15:26.043 "cntlid": 47, 00:15:26.043 "qid": 0, 00:15:26.043 "state": "enabled", 00:15:26.043 "thread": "nvmf_tgt_poll_group_000", 00:15:26.043 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:26.043 "listen_address": { 00:15:26.043 "trtype": "TCP", 00:15:26.043 "adrfam": "IPv4", 00:15:26.043 "traddr": "10.0.0.2", 00:15:26.043 "trsvcid": "4420" 00:15:26.043 }, 00:15:26.043 "peer_address": { 00:15:26.043 "trtype": "TCP", 00:15:26.043 "adrfam": "IPv4", 00:15:26.043 "traddr": "10.0.0.1", 00:15:26.043 "trsvcid": "44086" 00:15:26.043 }, 00:15:26.043 "auth": { 00:15:26.043 "state": "completed", 00:15:26.043 "digest": "sha256", 00:15:26.043 "dhgroup": "ffdhe8192" 00:15:26.043 } 00:15:26.043 } 00:15:26.043 ]' 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:26.043 08:59:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:26.043 08:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:26.043 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:26.043 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:26.043 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:26.043 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:26.043 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:26.302 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:26.302 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:26.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:26.869 08:59:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.128 
08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.128 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:27.386 00:15:27.386 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:27.386 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:27.386 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:27.645 { 00:15:27.645 "cntlid": 49, 00:15:27.645 "qid": 0, 00:15:27.645 "state": "enabled", 00:15:27.645 "thread": "nvmf_tgt_poll_group_000", 00:15:27.645 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:27.645 "listen_address": { 00:15:27.645 "trtype": "TCP", 00:15:27.645 "adrfam": "IPv4", 00:15:27.645 "traddr": "10.0.0.2", 00:15:27.645 "trsvcid": "4420" 00:15:27.645 }, 00:15:27.645 "peer_address": { 00:15:27.645 "trtype": "TCP", 00:15:27.645 "adrfam": "IPv4", 00:15:27.645 "traddr": "10.0.0.1", 00:15:27.645 "trsvcid": "44116" 00:15:27.645 }, 00:15:27.645 "auth": { 00:15:27.645 "state": "completed", 00:15:27.645 "digest": "sha384", 00:15:27.645 "dhgroup": "null" 00:15:27.645 } 00:15:27.645 } 00:15:27.645 ]' 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:27.645 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:27.904 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:27.904 08:59:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:28.471 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.471 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:28.471 08:59:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.472 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.730 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:28.989 00:15:28.989 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:28.989 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:28.989 08:59:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:29.247 { 00:15:29.247 "cntlid": 51, 00:15:29.247 "qid": 0, 00:15:29.247 "state": "enabled", 00:15:29.247 "thread": "nvmf_tgt_poll_group_000", 00:15:29.247 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:29.247 "listen_address": { 00:15:29.247 "trtype": "TCP", 00:15:29.247 "adrfam": "IPv4", 00:15:29.247 "traddr": "10.0.0.2", 00:15:29.247 "trsvcid": "4420" 00:15:29.247 }, 00:15:29.247 "peer_address": { 00:15:29.247 "trtype": "TCP", 00:15:29.247 "adrfam": "IPv4", 00:15:29.247 "traddr": "10.0.0.1", 00:15:29.247 "trsvcid": "43924" 00:15:29.247 }, 00:15:29.247 "auth": { 00:15:29.247 "state": "completed", 00:15:29.247 "digest": "sha384", 00:15:29.247 "dhgroup": "null" 00:15:29.247 } 00:15:29.247 } 00:15:29.247 ]' 00:15:29.247 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.248 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:29.506 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:29.506 08:59:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.073 08:59:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.333 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:30.592 00:15:30.592 08:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:30.592 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:30.592 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:30.851 { 00:15:30.851 "cntlid": 53, 00:15:30.851 "qid": 0, 00:15:30.851 "state": "enabled", 00:15:30.851 "thread": "nvmf_tgt_poll_group_000", 00:15:30.851 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:30.851 "listen_address": { 00:15:30.851 "trtype": "TCP", 00:15:30.851 "adrfam": "IPv4", 00:15:30.851 "traddr": "10.0.0.2", 00:15:30.851 "trsvcid": "4420" 00:15:30.851 }, 00:15:30.851 "peer_address": { 00:15:30.851 "trtype": "TCP", 00:15:30.851 "adrfam": "IPv4", 00:15:30.851 "traddr": "10.0.0.1", 00:15:30.851 "trsvcid": "43954" 00:15:30.851 }, 00:15:30.851 "auth": { 00:15:30.851 "state": "completed", 00:15:30.851 "digest": "sha384", 00:15:30.851 "dhgroup": "null" 00:15:30.851 } 00:15:30.851 } 00:15:30.851 ]' 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:30.851 08:59:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:31.110 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:31.111 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:31.679 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.679 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:31.937 
08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:31.937 08:59:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:32.195 00:15:32.195 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:32.196 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:32.196 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:32.454 08:59:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:32.454 { 00:15:32.454 "cntlid": 55, 00:15:32.454 "qid": 0, 00:15:32.454 "state": "enabled", 00:15:32.454 "thread": "nvmf_tgt_poll_group_000", 00:15:32.454 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:32.454 "listen_address": { 00:15:32.454 "trtype": "TCP", 00:15:32.454 "adrfam": "IPv4", 00:15:32.454 "traddr": "10.0.0.2", 00:15:32.454 "trsvcid": "4420" 00:15:32.454 }, 00:15:32.454 "peer_address": { 00:15:32.454 "trtype": "TCP", 00:15:32.454 "adrfam": "IPv4", 00:15:32.454 "traddr": "10.0.0.1", 00:15:32.454 "trsvcid": "43984" 00:15:32.454 }, 00:15:32.454 "auth": { 00:15:32.454 "state": "completed", 00:15:32.454 "digest": "sha384", 00:15:32.454 "dhgroup": "null" 00:15:32.454 } 00:15:32.454 } 00:15:32.454 ]' 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:32.454 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:32.455 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:32.455 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.455 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.455 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.713 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:32.713 08:59:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:33.281 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.281 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.281 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:33.281 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.282 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.282 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.282 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:33.282 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:33.282 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.282 08:59:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.540 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:33.798 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:33.798 { 00:15:33.798 "cntlid": 57, 00:15:33.798 "qid": 0, 00:15:33.798 "state": "enabled", 00:15:33.798 "thread": "nvmf_tgt_poll_group_000", 00:15:33.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:33.798 "listen_address": { 00:15:33.798 "trtype": "TCP", 00:15:33.798 "adrfam": "IPv4", 00:15:33.798 "traddr": "10.0.0.2", 00:15:33.798 
"trsvcid": "4420" 00:15:33.798 }, 00:15:33.798 "peer_address": { 00:15:33.798 "trtype": "TCP", 00:15:33.798 "adrfam": "IPv4", 00:15:33.798 "traddr": "10.0.0.1", 00:15:33.798 "trsvcid": "44012" 00:15:33.798 }, 00:15:33.798 "auth": { 00:15:33.798 "state": "completed", 00:15:33.798 "digest": "sha384", 00:15:33.798 "dhgroup": "ffdhe2048" 00:15:33.798 } 00:15:33.798 } 00:15:33.798 ]' 00:15:33.798 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.057 08:59:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.316 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:34.316 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:34.888 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:34.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:34.888 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:34.888 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:34.888 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.888 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:34.889 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:34.889 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:34.889 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:35.240 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:15:35.240 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:35.240 08:59:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:35.240 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.241 08:59:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:35.241 00:15:35.241 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:35.241 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:35.241 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:35.509 { 00:15:35.509 "cntlid": 59, 00:15:35.509 "qid": 0, 00:15:35.509 "state": "enabled", 00:15:35.509 "thread": "nvmf_tgt_poll_group_000", 00:15:35.509 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:35.509 "listen_address": { 00:15:35.509 "trtype": "TCP", 00:15:35.509 "adrfam": "IPv4", 00:15:35.509 "traddr": "10.0.0.2", 00:15:35.509 "trsvcid": "4420" 00:15:35.509 }, 00:15:35.509 "peer_address": { 00:15:35.509 "trtype": "TCP", 00:15:35.509 "adrfam": "IPv4", 00:15:35.509 "traddr": "10.0.0.1", 00:15:35.509 "trsvcid": "44038" 00:15:35.509 }, 00:15:35.509 "auth": { 00:15:35.509 "state": "completed", 00:15:35.509 "digest": "sha384", 00:15:35.509 "dhgroup": "ffdhe2048" 00:15:35.509 } 00:15:35.509 } 00:15:35.509 ]' 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:35.509 08:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:35.509 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:35.768 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:35.768 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:35.768 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:35.768 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:35.768 08:59:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:36.335 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.335 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.594 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.852 00:15:36.852 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:36.852 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:36.852 08:59:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.111 08:59:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:37.111 { 00:15:37.111 "cntlid": 61, 00:15:37.111 "qid": 0, 00:15:37.111 "state": "enabled", 00:15:37.111 "thread": "nvmf_tgt_poll_group_000", 00:15:37.111 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:37.111 "listen_address": { 00:15:37.111 "trtype": "TCP", 00:15:37.111 "adrfam": "IPv4", 00:15:37.111 "traddr": "10.0.0.2", 00:15:37.111 "trsvcid": "4420" 00:15:37.111 }, 00:15:37.111 "peer_address": { 00:15:37.111 "trtype": "TCP", 00:15:37.111 "adrfam": "IPv4", 00:15:37.111 "traddr": "10.0.0.1", 00:15:37.111 "trsvcid": "44060" 00:15:37.111 }, 00:15:37.111 "auth": { 00:15:37.111 "state": "completed", 00:15:37.111 "digest": "sha384", 00:15:37.111 "dhgroup": "ffdhe2048" 00:15:37.111 } 00:15:37.111 } 00:15:37.111 ]' 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.111 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.369 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:37.369 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:37.937 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:37.937 08:59:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.197 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:38.456 00:15:38.456 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:38.456 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:38.456 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:38.715 { 00:15:38.715 "cntlid": 63, 00:15:38.715 "qid": 0, 00:15:38.715 "state": "enabled", 00:15:38.715 "thread": "nvmf_tgt_poll_group_000", 00:15:38.715 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:38.715 "listen_address": { 00:15:38.715 "trtype": "TCP", 00:15:38.715 "adrfam": 
"IPv4", 00:15:38.715 "traddr": "10.0.0.2", 00:15:38.715 "trsvcid": "4420" 00:15:38.715 }, 00:15:38.715 "peer_address": { 00:15:38.715 "trtype": "TCP", 00:15:38.715 "adrfam": "IPv4", 00:15:38.715 "traddr": "10.0.0.1", 00:15:38.715 "trsvcid": "45654" 00:15:38.715 }, 00:15:38.715 "auth": { 00:15:38.715 "state": "completed", 00:15:38.715 "digest": "sha384", 00:15:38.715 "dhgroup": "ffdhe2048" 00:15:38.715 } 00:15:38.715 } 00:15:38.715 ]' 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:38.715 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:38.974 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:38.974 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:38.974 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:38.974 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:38.974 08:59:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:39.541 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:39.541 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:39.541 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.541 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.541 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:39.801 
08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:39.801 08:59:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:40.060 00:15:40.060 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:40.060 08:59:56 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:40.060 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:40.319 { 00:15:40.319 "cntlid": 65, 00:15:40.319 "qid": 0, 00:15:40.319 "state": "enabled", 00:15:40.319 "thread": "nvmf_tgt_poll_group_000", 00:15:40.319 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:40.319 "listen_address": { 00:15:40.319 "trtype": "TCP", 00:15:40.319 "adrfam": "IPv4", 00:15:40.319 "traddr": "10.0.0.2", 00:15:40.319 "trsvcid": "4420" 00:15:40.319 }, 00:15:40.319 "peer_address": { 00:15:40.319 "trtype": "TCP", 00:15:40.319 "adrfam": "IPv4", 00:15:40.319 "traddr": "10.0.0.1", 00:15:40.319 "trsvcid": "45690" 00:15:40.319 }, 00:15:40.319 "auth": { 00:15:40.319 "state": "completed", 00:15:40.319 "digest": "sha384", 00:15:40.319 "dhgroup": "ffdhe3072" 00:15:40.319 } 00:15:40.319 } 00:15:40.319 ]' 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:15:40.319 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:40.578 08:59:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:41.147 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:41.407 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.407 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:41.666 00:15:41.666 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:41.666 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:41.666 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:41.925 08:59:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:41.925 { 00:15:41.925 "cntlid": 67, 00:15:41.925 "qid": 0, 00:15:41.925 "state": "enabled", 00:15:41.925 "thread": "nvmf_tgt_poll_group_000", 00:15:41.925 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:41.925 "listen_address": { 00:15:41.925 "trtype": "TCP", 00:15:41.925 "adrfam": "IPv4", 00:15:41.925 "traddr": "10.0.0.2", 00:15:41.925 "trsvcid": "4420" 00:15:41.925 }, 00:15:41.925 "peer_address": { 00:15:41.925 "trtype": "TCP", 00:15:41.925 "adrfam": "IPv4", 00:15:41.925 "traddr": "10.0.0.1", 00:15:41.925 "trsvcid": "45718" 00:15:41.925 }, 00:15:41.925 "auth": { 00:15:41.925 "state": "completed", 00:15:41.925 "digest": "sha384", 00:15:41.925 "dhgroup": "ffdhe3072" 00:15:41.925 } 00:15:41.925 } 00:15:41.925 ]' 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:41.925 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:42.184 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:42.184 08:59:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:42.184 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.184 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.184 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.443 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:42.443 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.012 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.012 08:59:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.271 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:43.530 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.530 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:43.530 { 00:15:43.530 "cntlid": 69, 00:15:43.530 "qid": 0, 00:15:43.530 "state": "enabled", 00:15:43.530 "thread": "nvmf_tgt_poll_group_000", 00:15:43.530 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:43.530 
"listen_address": { 00:15:43.530 "trtype": "TCP", 00:15:43.530 "adrfam": "IPv4", 00:15:43.530 "traddr": "10.0.0.2", 00:15:43.530 "trsvcid": "4420" 00:15:43.530 }, 00:15:43.530 "peer_address": { 00:15:43.530 "trtype": "TCP", 00:15:43.530 "adrfam": "IPv4", 00:15:43.530 "traddr": "10.0.0.1", 00:15:43.531 "trsvcid": "45734" 00:15:43.531 }, 00:15:43.531 "auth": { 00:15:43.531 "state": "completed", 00:15:43.531 "digest": "sha384", 00:15:43.531 "dhgroup": "ffdhe3072" 00:15:43.531 } 00:15:43.531 } 00:15:43.531 ]' 00:15:43.531 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:43.790 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:44.049 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:44.049 08:59:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:44.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.618 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:44.877 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:45.136 00:15:45.136 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:45.136 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:15:45.136 09:00:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:45.136 { 00:15:45.136 "cntlid": 71, 00:15:45.136 "qid": 0, 00:15:45.136 "state": "enabled", 00:15:45.136 "thread": "nvmf_tgt_poll_group_000", 00:15:45.136 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:45.136 "listen_address": { 00:15:45.136 "trtype": "TCP", 00:15:45.136 "adrfam": "IPv4", 00:15:45.136 "traddr": "10.0.0.2", 00:15:45.136 "trsvcid": "4420" 00:15:45.136 }, 00:15:45.136 "peer_address": { 00:15:45.136 "trtype": "TCP", 00:15:45.136 "adrfam": "IPv4", 00:15:45.136 "traddr": "10.0.0.1", 00:15:45.136 "trsvcid": "45762" 00:15:45.136 }, 00:15:45.136 "auth": { 00:15:45.136 "state": "completed", 00:15:45.136 "digest": "sha384", 00:15:45.136 "dhgroup": "ffdhe3072" 00:15:45.136 } 00:15:45.136 } 00:15:45.136 ]' 00:15:45.136 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.395 09:00:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.395 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.654 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:45.654 09:00:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.221 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.221 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.481 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:46.740 00:15:46.740 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.740 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.740 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.999 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.999 { 00:15:46.999 "cntlid": 73, 00:15:46.999 "qid": 0, 00:15:46.999 "state": "enabled", 00:15:46.999 "thread": "nvmf_tgt_poll_group_000", 00:15:46.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:46.999 "listen_address": { 00:15:46.999 "trtype": "TCP", 00:15:46.999 "adrfam": "IPv4", 00:15:46.999 "traddr": "10.0.0.2", 00:15:46.999 "trsvcid": "4420" 00:15:46.999 }, 00:15:46.999 "peer_address": { 00:15:46.999 "trtype": "TCP", 00:15:46.999 "adrfam": "IPv4", 00:15:46.999 "traddr": "10.0.0.1", 00:15:46.999 "trsvcid": "45788" 00:15:46.999 }, 00:15:46.999 "auth": { 00:15:46.999 "state": "completed", 00:15:46.999 "digest": "sha384", 00:15:46.999 "dhgroup": "ffdhe4096" 00:15:46.999 } 00:15:46.999 } 00:15:46.999 ]' 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.999 09:00:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.999 09:00:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:47.259 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:47.259 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:47.826 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.085 09:00:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:48.343 00:15:48.343 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:48.343 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:48.343 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:48.601 { 00:15:48.601 "cntlid": 75, 00:15:48.601 "qid": 0, 00:15:48.601 "state": "enabled", 00:15:48.601 "thread": "nvmf_tgt_poll_group_000", 00:15:48.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:48.601 
"listen_address": { 00:15:48.601 "trtype": "TCP", 00:15:48.601 "adrfam": "IPv4", 00:15:48.601 "traddr": "10.0.0.2", 00:15:48.601 "trsvcid": "4420" 00:15:48.601 }, 00:15:48.601 "peer_address": { 00:15:48.601 "trtype": "TCP", 00:15:48.601 "adrfam": "IPv4", 00:15:48.601 "traddr": "10.0.0.1", 00:15:48.601 "trsvcid": "48170" 00:15:48.601 }, 00:15:48.601 "auth": { 00:15:48.601 "state": "completed", 00:15:48.601 "digest": "sha384", 00:15:48.601 "dhgroup": "ffdhe4096" 00:15:48.601 } 00:15:48.601 } 00:15:48.601 ]' 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.601 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.860 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:48.860 09:00:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.428 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.428 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.687 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:49.961 00:15:49.961 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:15:49.961 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.961 09:00:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:50.220 { 00:15:50.220 "cntlid": 77, 00:15:50.220 "qid": 0, 00:15:50.220 "state": "enabled", 00:15:50.220 "thread": "nvmf_tgt_poll_group_000", 00:15:50.220 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:50.220 "listen_address": { 00:15:50.220 "trtype": "TCP", 00:15:50.220 "adrfam": "IPv4", 00:15:50.220 "traddr": "10.0.0.2", 00:15:50.220 "trsvcid": "4420" 00:15:50.220 }, 00:15:50.220 "peer_address": { 00:15:50.220 "trtype": "TCP", 00:15:50.220 "adrfam": "IPv4", 00:15:50.220 "traddr": "10.0.0.1", 00:15:50.220 "trsvcid": "48202" 00:15:50.220 }, 00:15:50.220 "auth": { 00:15:50.220 "state": "completed", 00:15:50.220 "digest": "sha384", 00:15:50.220 "dhgroup": "ffdhe4096" 00:15:50.220 } 00:15:50.220 } 00:15:50.220 ]' 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:50.220 09:00:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:50.220 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:50.221 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:50.221 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.221 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.221 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:50.479 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:50.479 09:00:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.046 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.046 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:51.305 09:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.305 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:51.564 00:15:51.564 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:51.564 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:51.564 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.822 09:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.822 { 00:15:51.822 "cntlid": 79, 00:15:51.822 "qid": 0, 00:15:51.822 "state": "enabled", 00:15:51.822 "thread": "nvmf_tgt_poll_group_000", 00:15:51.822 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:51.822 "listen_address": { 00:15:51.822 "trtype": "TCP", 00:15:51.822 "adrfam": "IPv4", 00:15:51.822 "traddr": "10.0.0.2", 00:15:51.822 "trsvcid": "4420" 00:15:51.822 }, 00:15:51.822 "peer_address": { 00:15:51.822 "trtype": "TCP", 00:15:51.822 "adrfam": "IPv4", 00:15:51.822 "traddr": "10.0.0.1", 00:15:51.822 "trsvcid": "48232" 00:15:51.822 }, 00:15:51.822 "auth": { 00:15:51.822 "state": "completed", 00:15:51.822 "digest": "sha384", 00:15:51.822 "dhgroup": "ffdhe4096" 00:15:51.822 } 00:15:51.822 } 00:15:51.822 ]' 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.822 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:52.080 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.080 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.080 09:00:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.080 09:00:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.080 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:52.080 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.018 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.018 09:00:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:53.278 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:53.538 { 00:15:53.538 "cntlid": 81, 00:15:53.538 "qid": 0, 00:15:53.538 "state": "enabled", 00:15:53.538 "thread": "nvmf_tgt_poll_group_000", 00:15:53.538 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:53.538 "listen_address": { 
00:15:53.538 "trtype": "TCP", 00:15:53.538 "adrfam": "IPv4", 00:15:53.538 "traddr": "10.0.0.2", 00:15:53.538 "trsvcid": "4420" 00:15:53.538 }, 00:15:53.538 "peer_address": { 00:15:53.538 "trtype": "TCP", 00:15:53.538 "adrfam": "IPv4", 00:15:53.538 "traddr": "10.0.0.1", 00:15:53.538 "trsvcid": "48262" 00:15:53.538 }, 00:15:53.538 "auth": { 00:15:53.538 "state": "completed", 00:15:53.538 "digest": "sha384", 00:15:53.538 "dhgroup": "ffdhe6144" 00:15:53.538 } 00:15:53.538 } 00:15:53.538 ]' 00:15:53.538 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:53.797 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.058 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:54.058 09:00:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.625 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.885 09:00:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:55.144 00:15:55.144 09:00:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.144 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.144 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.404 { 00:15:55.404 "cntlid": 83, 00:15:55.404 "qid": 0, 00:15:55.404 "state": "enabled", 00:15:55.404 "thread": "nvmf_tgt_poll_group_000", 00:15:55.404 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:55.404 "listen_address": { 00:15:55.404 "trtype": "TCP", 00:15:55.404 "adrfam": "IPv4", 00:15:55.404 "traddr": "10.0.0.2", 00:15:55.404 "trsvcid": "4420" 00:15:55.404 }, 00:15:55.404 "peer_address": { 00:15:55.404 "trtype": "TCP", 00:15:55.404 "adrfam": "IPv4", 00:15:55.404 "traddr": "10.0.0.1", 00:15:55.404 "trsvcid": "48292" 00:15:55.404 }, 00:15:55.404 "auth": { 00:15:55.404 "state": "completed", 00:15:55.404 "digest": "sha384", 00:15:55.404 "dhgroup": "ffdhe6144" 00:15:55.404 } 00:15:55.404 } 00:15:55.404 ]' 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.404 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.663 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:55.663 09:00:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.231 09:00:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.231 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.491 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.751 00:15:56.751 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:56.751 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:56.751 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.010 { 00:15:57.010 "cntlid": 85, 00:15:57.010 "qid": 0, 00:15:57.010 "state": "enabled", 00:15:57.010 "thread": "nvmf_tgt_poll_group_000", 00:15:57.010 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:57.010 "listen_address": { 00:15:57.010 "trtype": "TCP", 00:15:57.010 "adrfam": "IPv4", 00:15:57.010 "traddr": "10.0.0.2", 00:15:57.010 "trsvcid": "4420" 00:15:57.010 }, 00:15:57.010 "peer_address": { 00:15:57.010 "trtype": "TCP", 00:15:57.010 "adrfam": "IPv4", 00:15:57.010 "traddr": "10.0.0.1", 00:15:57.010 "trsvcid": "48318" 00:15:57.010 }, 00:15:57.010 "auth": { 00:15:57.010 "state": "completed", 00:15:57.010 "digest": "sha384", 00:15:57.010 "dhgroup": "ffdhe6144" 00:15:57.010 } 00:15:57.010 } 00:15:57.010 ]' 00:15:57.010 09:00:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.010 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:57.010 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.268 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:57.268 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.268 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:15:57.268 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.268 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.527 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:57.527 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.095 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.095 09:00:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:58.095 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.096 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:58.665 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.665 { 00:15:58.665 "cntlid": 87, 00:15:58.665 "qid": 0, 00:15:58.665 "state": "enabled", 00:15:58.665 "thread": "nvmf_tgt_poll_group_000", 00:15:58.665 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:15:58.665 "listen_address": { 00:15:58.665 "trtype": 
"TCP", 00:15:58.665 "adrfam": "IPv4", 00:15:58.665 "traddr": "10.0.0.2", 00:15:58.665 "trsvcid": "4420" 00:15:58.665 }, 00:15:58.665 "peer_address": { 00:15:58.665 "trtype": "TCP", 00:15:58.665 "adrfam": "IPv4", 00:15:58.665 "traddr": "10.0.0.1", 00:15:58.665 "trsvcid": "57654" 00:15:58.665 }, 00:15:58.665 "auth": { 00:15:58.665 "state": "completed", 00:15:58.665 "digest": "sha384", 00:15:58.665 "dhgroup": "ffdhe6144" 00:15:58.665 } 00:15:58.665 } 00:15:58.665 ]' 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:58.665 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:58.924 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:58.924 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:58.924 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:58.924 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:58.924 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.183 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:59.183 09:00:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.752 09:00:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.752 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.753 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:59.753 09:00:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:00.321 00:16:00.322 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.322 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.322 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.581 { 00:16:00.581 "cntlid": 89, 00:16:00.581 "qid": 0, 00:16:00.581 "state": "enabled", 00:16:00.581 "thread": "nvmf_tgt_poll_group_000", 00:16:00.581 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:00.581 "listen_address": { 00:16:00.581 "trtype": "TCP", 00:16:00.581 "adrfam": "IPv4", 00:16:00.581 "traddr": "10.0.0.2", 00:16:00.581 "trsvcid": "4420" 00:16:00.581 }, 00:16:00.581 "peer_address": { 00:16:00.581 "trtype": "TCP", 00:16:00.581 "adrfam": "IPv4", 00:16:00.581 "traddr": "10.0.0.1", 00:16:00.581 "trsvcid": "57692" 00:16:00.581 }, 00:16:00.581 "auth": { 00:16:00.581 "state": "completed", 00:16:00.581 "digest": "sha384", 00:16:00.581 "dhgroup": "ffdhe8192" 00:16:00.581 } 00:16:00.581 } 00:16:00.581 ]' 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.581 09:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.581 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.841 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:00.841 09:00:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.408 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.408 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:01.666 09:00:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:02.234 00:16:02.234 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:02.234 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:02.234 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:02.493 { 00:16:02.493 "cntlid": 91, 00:16:02.493 "qid": 0, 00:16:02.493 "state": "enabled", 00:16:02.493 "thread": "nvmf_tgt_poll_group_000", 00:16:02.493 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:02.493 "listen_address": { 00:16:02.493 "trtype": "TCP", 00:16:02.493 "adrfam": "IPv4", 00:16:02.493 "traddr": "10.0.0.2", 00:16:02.493 "trsvcid": "4420" 00:16:02.493 }, 00:16:02.493 "peer_address": { 00:16:02.493 "trtype": "TCP", 00:16:02.493 "adrfam": "IPv4", 00:16:02.493 "traddr": "10.0.0.1", 00:16:02.493 "trsvcid": "57706" 00:16:02.493 }, 00:16:02.493 "auth": { 00:16:02.493 "state": "completed", 00:16:02.493 "digest": "sha384", 00:16:02.493 "dhgroup": "ffdhe8192" 00:16:02.493 } 00:16:02.493 } 00:16:02.493 ]' 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.493 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.752 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:02.752 09:00:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.320 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.320 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.578 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.579 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.579 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:03.579 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:04.147 00:16:04.147 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:04.147 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:04.147 09:00:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:04.147 { 00:16:04.147 "cntlid": 93, 00:16:04.147 "qid": 0, 00:16:04.147 "state": "enabled", 00:16:04.147 "thread": "nvmf_tgt_poll_group_000", 00:16:04.147 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:04.147 "listen_address": { 00:16:04.147 "trtype": "TCP", 00:16:04.147 "adrfam": "IPv4", 00:16:04.147 "traddr": "10.0.0.2", 00:16:04.147 "trsvcid": "4420" 00:16:04.147 }, 00:16:04.147 "peer_address": { 00:16:04.147 "trtype": "TCP", 00:16:04.147 "adrfam": "IPv4", 00:16:04.147 "traddr": "10.0.0.1", 00:16:04.147 "trsvcid": "57736" 00:16:04.147 }, 00:16:04.147 "auth": { 00:16:04.147 "state": "completed", 00:16:04.147 "digest": "sha384", 00:16:04.147 "dhgroup": "ffdhe8192" 00:16:04.147 } 00:16:04.147 } 00:16:04.147 ]' 00:16:04.147 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.405 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.664 09:00:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:04.664 09:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.231 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.231 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.490 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.490 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:05.490 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.490 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:05.749 00:16:05.749 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:05.749 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.749 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.008 { 00:16:06.008 "cntlid": 95, 00:16:06.008 "qid": 0, 00:16:06.008 "state": "enabled", 00:16:06.008 "thread": "nvmf_tgt_poll_group_000", 00:16:06.008 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:06.008 "listen_address": { 00:16:06.008 "trtype": "TCP", 00:16:06.008 "adrfam": "IPv4", 00:16:06.008 "traddr": "10.0.0.2", 00:16:06.008 "trsvcid": "4420" 00:16:06.008 }, 00:16:06.008 "peer_address": { 00:16:06.008 "trtype": "TCP", 00:16:06.008 "adrfam": "IPv4", 00:16:06.008 "traddr": "10.0.0.1", 00:16:06.008 "trsvcid": "57762" 00:16:06.008 }, 00:16:06.008 "auth": { 00:16:06.008 "state": "completed", 00:16:06.008 "digest": "sha384", 00:16:06.008 "dhgroup": "ffdhe8192" 00:16:06.008 } 00:16:06.008 } 00:16:06.008 ]' 00:16:06.008 09:00:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.008 09:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:06.008 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:06.267 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:06.267 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:06.267 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.267 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.267 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.525 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:06.525 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.094 09:00:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.094 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:07.353 00:16:07.353 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:07.353 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:07.353 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:07.612 09:00:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:07.612 { 00:16:07.612 "cntlid": 97, 00:16:07.612 "qid": 0, 00:16:07.612 "state": "enabled", 00:16:07.612 "thread": "nvmf_tgt_poll_group_000", 00:16:07.612 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:07.612 "listen_address": { 00:16:07.612 "trtype": "TCP", 00:16:07.612 "adrfam": "IPv4", 00:16:07.612 "traddr": "10.0.0.2", 00:16:07.612 "trsvcid": "4420" 00:16:07.612 }, 00:16:07.612 "peer_address": { 00:16:07.612 "trtype": "TCP", 00:16:07.612 "adrfam": "IPv4", 00:16:07.612 "traddr": "10.0.0.1", 00:16:07.612 "trsvcid": "57790" 00:16:07.612 }, 00:16:07.612 "auth": { 00:16:07.612 "state": "completed", 00:16:07.612 "digest": "sha512", 00:16:07.612 "dhgroup": "null" 00:16:07.612 } 00:16:07.612 } 00:16:07.612 ]' 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:07.612 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.870 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:07.870 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.870 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.870 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.870 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.150 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:08.150 09:00:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:08.717 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.717 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:08.975
00:16:08.975 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:08.975 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:08.975 09:00:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:09.233 {
00:16:09.233 "cntlid": 99,
00:16:09.233 "qid": 0,
00:16:09.233 "state": "enabled",
00:16:09.233 "thread": "nvmf_tgt_poll_group_000",
00:16:09.233 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:09.233 "listen_address": {
00:16:09.233 "trtype": "TCP",
00:16:09.233 "adrfam": "IPv4",
00:16:09.233 "traddr": "10.0.0.2",
00:16:09.233 "trsvcid": "4420"
00:16:09.233 },
00:16:09.233 "peer_address": {
00:16:09.233 "trtype": "TCP",
00:16:09.233 "adrfam": "IPv4",
00:16:09.233 "traddr": "10.0.0.1",
00:16:09.233 "trsvcid": "59594"
00:16:09.233 },
00:16:09.233 "auth": {
00:16:09.233 "state": "completed",
00:16:09.233 "digest": "sha512",
00:16:09.233 "dhgroup": "null"
00:16:09.233 }
00:16:09.233 }
00:16:09.233 ]'
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:09.233 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:09.491 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==:
00:16:09.491 09:00:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==:
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:10.059 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:10.059 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.318 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:10.576
00:16:10.576 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:10.576 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:10.576 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:10.836 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:10.836 {
00:16:10.836 "cntlid": 101,
00:16:10.836 "qid": 0,
00:16:10.837 "state": "enabled",
00:16:10.837 "thread": "nvmf_tgt_poll_group_000",
00:16:10.837 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:10.837 "listen_address": {
00:16:10.837 "trtype": "TCP",
00:16:10.837 "adrfam": "IPv4",
00:16:10.837 "traddr": "10.0.0.2",
00:16:10.837 "trsvcid": "4420"
00:16:10.837 },
00:16:10.837 "peer_address": {
00:16:10.837 "trtype": "TCP",
00:16:10.837 "adrfam": "IPv4",
00:16:10.837 "traddr": "10.0.0.1",
00:16:10.837 "trsvcid": "59628"
00:16:10.837 },
00:16:10.837 "auth": {
00:16:10.837 "state": "completed",
00:16:10.837 "digest": "sha512",
00:16:10.837 "dhgroup": "null"
00:16:10.837 }
00:16:10.837 }
00:16:10.837 ]'
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:10.837 09:00:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:11.095 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF:
00:16:11.095 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF:
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:11.664 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:11.664 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3
00:16:11.922 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:11.923 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:11.923 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:11.923 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:11.923 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:11.923 09:00:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:12.181
00:16:12.181 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:12.181 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:12.181 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:12.440 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:12.440 {
00:16:12.440 "cntlid": 103,
00:16:12.440 "qid": 0,
00:16:12.440 "state": "enabled",
00:16:12.440 "thread": "nvmf_tgt_poll_group_000",
00:16:12.440 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:12.440 "listen_address": {
00:16:12.440 "trtype": "TCP",
00:16:12.440 "adrfam": "IPv4",
00:16:12.440 "traddr": "10.0.0.2",
00:16:12.440 "trsvcid": "4420"
00:16:12.440 },
00:16:12.440 "peer_address": {
00:16:12.440 "trtype": "TCP",
00:16:12.440 "adrfam": "IPv4",
00:16:12.440 "traddr": "10.0.0.1",
00:16:12.440 "trsvcid": "59658"
00:16:12.440 },
00:16:12.440 "auth": {
00:16:12.440 "state": "completed",
00:16:12.440 "digest": "sha512",
00:16:12.440 "dhgroup": "null"
00:16:12.440 }
00:16:12.440 }
00:16:12.440 ]'
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]]
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:12.441 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:12.699 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=:
00:16:12.699 09:00:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=:
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:13.266 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}"
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:13.266 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:13.525 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:13.783
00:16:13.783 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:13.783 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:13.783 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:14.042 {
00:16:14.042 "cntlid": 105,
00:16:14.042 "qid": 0,
00:16:14.042 "state": "enabled",
00:16:14.042 "thread": "nvmf_tgt_poll_group_000",
00:16:14.042 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:14.042 "listen_address": {
00:16:14.042 "trtype": "TCP",
00:16:14.042 "adrfam": "IPv4",
00:16:14.042 "traddr": "10.0.0.2",
00:16:14.042 "trsvcid": "4420"
00:16:14.042 },
00:16:14.042 "peer_address": {
00:16:14.042 "trtype": "TCP",
00:16:14.042 "adrfam": "IPv4",
00:16:14.042 "traddr": "10.0.0.1",
00:16:14.042 "trsvcid": "59688"
00:16:14.042 },
00:16:14.042 "auth": {
00:16:14.042 "state": "completed",
00:16:14.042 "digest": "sha512",
00:16:14.042 "dhgroup": "ffdhe2048"
00:16:14.042 }
00:16:14.042 }
00:16:14.042 ]'
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:14.042 09:00:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:14.301 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=:
00:16:14.301 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=:
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:14.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:14.869 09:00:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:15.127 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:15.385
00:16:15.386 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:15.386 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:15.386 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:15.644 {
00:16:15.644 "cntlid": 107,
00:16:15.644 "qid": 0,
00:16:15.644 "state": "enabled",
00:16:15.644 "thread": "nvmf_tgt_poll_group_000",
00:16:15.644 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:15.644 "listen_address": {
00:16:15.644 "trtype": "TCP",
00:16:15.644 "adrfam": "IPv4",
00:16:15.644 "traddr": "10.0.0.2",
00:16:15.644 "trsvcid": "4420"
00:16:15.644 },
00:16:15.644 "peer_address": {
00:16:15.644 "trtype": "TCP",
00:16:15.644 "adrfam": "IPv4",
00:16:15.644 "traddr": "10.0.0.1",
00:16:15.644 "trsvcid": "59708"
00:16:15.644 },
00:16:15.644 "auth": {
00:16:15.644 "state": "completed",
00:16:15.644 "digest": "sha512",
00:16:15.644 "dhgroup": "ffdhe2048"
00:16:15.644 }
00:16:15.644 }
00:16:15.644 ]'
00:16:15.644 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:15.645 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:15.904 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==:
00:16:15.904 09:00:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==:
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:16.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.472 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:16.784 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:17.150
00:16:17.150 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:16:17.150 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:17.150 09:00:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:16:17.150 {
00:16:17.150 "cntlid": 109,
00:16:17.150 "qid": 0,
00:16:17.150 "state": "enabled",
00:16:17.150 "thread": "nvmf_tgt_poll_group_000",
00:16:17.150 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:17.150 "listen_address": {
00:16:17.150 "trtype": "TCP",
00:16:17.150 "adrfam": "IPv4",
00:16:17.150 "traddr": "10.0.0.2",
00:16:17.150 "trsvcid": "4420"
00:16:17.150 },
00:16:17.150 "peer_address": {
00:16:17.150 "trtype": "TCP",
00:16:17.150 "adrfam": "IPv4",
00:16:17.150 "traddr": "10.0.0.1",
00:16:17.150 "trsvcid": "59724"
00:16:17.150 },
00:16:17.150 "auth": {
00:16:17.150 "state": "completed",
00:16:17.150 "digest": "sha512",
00:16:17.150 "dhgroup": "ffdhe2048"
00:16:17.150 }
00:16:17.150 }
00:16:17.150 ]'
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:16:17.150 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:16:17.150 09:00:33
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.455 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.455 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.455 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.455 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:17.455 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.022 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.022 
09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.022 09:00:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.281 09:00:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.281 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:18.539 00:16:18.539 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.539 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.539 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.798 { 00:16:18.798 "cntlid": 111, 
00:16:18.798 "qid": 0, 00:16:18.798 "state": "enabled", 00:16:18.798 "thread": "nvmf_tgt_poll_group_000", 00:16:18.798 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:18.798 "listen_address": { 00:16:18.798 "trtype": "TCP", 00:16:18.798 "adrfam": "IPv4", 00:16:18.798 "traddr": "10.0.0.2", 00:16:18.798 "trsvcid": "4420" 00:16:18.798 }, 00:16:18.798 "peer_address": { 00:16:18.798 "trtype": "TCP", 00:16:18.798 "adrfam": "IPv4", 00:16:18.798 "traddr": "10.0.0.1", 00:16:18.798 "trsvcid": "40768" 00:16:18.798 }, 00:16:18.798 "auth": { 00:16:18.798 "state": "completed", 00:16:18.798 "digest": "sha512", 00:16:18.798 "dhgroup": "ffdhe2048" 00:16:18.798 } 00:16:18.798 } 00:16:18.798 ]' 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:18.798 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.799 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:18.799 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.799 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.799 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.799 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.057 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:19.057 09:00:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.624 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.624 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:19.882 09:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.882 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.883 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.883 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.883 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.883 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.883 09:00:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:20.141 00:16:20.141 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.141 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.142 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.400 { 00:16:20.400 "cntlid": 113, 00:16:20.400 "qid": 0, 00:16:20.400 "state": "enabled", 00:16:20.400 "thread": "nvmf_tgt_poll_group_000", 00:16:20.400 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:20.400 "listen_address": { 00:16:20.400 "trtype": "TCP", 00:16:20.400 "adrfam": "IPv4", 00:16:20.400 "traddr": "10.0.0.2", 00:16:20.400 "trsvcid": "4420" 00:16:20.400 }, 00:16:20.400 "peer_address": { 00:16:20.400 "trtype": "TCP", 00:16:20.400 "adrfam": "IPv4", 00:16:20.400 "traddr": "10.0.0.1", 00:16:20.400 "trsvcid": "40808" 00:16:20.400 }, 00:16:20.400 "auth": { 00:16:20.400 "state": 
"completed", 00:16:20.400 "digest": "sha512", 00:16:20.400 "dhgroup": "ffdhe3072" 00:16:20.400 } 00:16:20.400 } 00:16:20.400 ]' 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.400 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.659 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:20.659 09:00:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret 
DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.226 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.226 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.485 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.744 00:16:21.744 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:21.744 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:21.744 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.003 { 00:16:22.003 "cntlid": 115, 00:16:22.003 "qid": 0, 00:16:22.003 "state": "enabled", 00:16:22.003 "thread": "nvmf_tgt_poll_group_000", 00:16:22.003 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:22.003 "listen_address": { 00:16:22.003 "trtype": "TCP", 00:16:22.003 "adrfam": "IPv4", 00:16:22.003 "traddr": "10.0.0.2", 00:16:22.003 "trsvcid": "4420" 00:16:22.003 }, 00:16:22.003 "peer_address": { 00:16:22.003 "trtype": "TCP", 00:16:22.003 "adrfam": "IPv4", 00:16:22.003 "traddr": "10.0.0.1", 00:16:22.003 "trsvcid": "40828" 00:16:22.003 }, 00:16:22.003 "auth": { 00:16:22.003 "state": "completed", 00:16:22.003 "digest": "sha512", 00:16:22.003 "dhgroup": "ffdhe3072" 00:16:22.003 } 00:16:22.003 } 00:16:22.003 ]' 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.003 09:00:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.003 09:00:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.262 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:22.262 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:22.830 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:22.830 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.089 09:00:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.348 00:16:23.348 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.348 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.348 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.607 09:00:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.607 { 00:16:23.607 "cntlid": 117, 00:16:23.607 "qid": 0, 00:16:23.607 "state": "enabled", 00:16:23.607 "thread": "nvmf_tgt_poll_group_000", 00:16:23.607 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:23.607 "listen_address": { 00:16:23.607 "trtype": "TCP", 00:16:23.607 "adrfam": "IPv4", 00:16:23.607 "traddr": "10.0.0.2", 00:16:23.607 "trsvcid": "4420" 00:16:23.607 }, 00:16:23.607 "peer_address": { 00:16:23.607 "trtype": "TCP", 00:16:23.607 "adrfam": "IPv4", 00:16:23.607 "traddr": "10.0.0.1", 00:16:23.607 "trsvcid": "40850" 00:16:23.607 }, 00:16:23.607 "auth": { 00:16:23.607 "state": "completed", 00:16:23.607 "digest": "sha512", 00:16:23.607 "dhgroup": "ffdhe3072" 00:16:23.607 } 00:16:23.607 } 00:16:23.607 ]' 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:23.607 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:23.867 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:23.867 09:00:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.435 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.435 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.693 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:24.953 00:16:24.953 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:24.953 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:24.953 09:00:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.211 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.211 { 00:16:25.211 "cntlid": 119, 00:16:25.211 "qid": 0, 00:16:25.211 "state": "enabled", 00:16:25.211 "thread": "nvmf_tgt_poll_group_000", 00:16:25.211 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:25.212 "listen_address": { 00:16:25.212 "trtype": "TCP", 00:16:25.212 "adrfam": "IPv4", 00:16:25.212 "traddr": "10.0.0.2", 00:16:25.212 "trsvcid": "4420" 00:16:25.212 }, 00:16:25.212 "peer_address": { 00:16:25.212 "trtype": "TCP", 00:16:25.212 "adrfam": "IPv4", 00:16:25.212 "traddr": "10.0.0.1", 
00:16:25.212 "trsvcid": "40864" 00:16:25.212 }, 00:16:25.212 "auth": { 00:16:25.212 "state": "completed", 00:16:25.212 "digest": "sha512", 00:16:25.212 "dhgroup": "ffdhe3072" 00:16:25.212 } 00:16:25.212 } 00:16:25.212 ]' 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.212 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.470 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:25.470 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.037 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.037 09:00:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:26.297 09:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.297 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:26.582 00:16:26.582 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.582 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.582 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:26.840 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:26.840 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:26.840 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.840 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:26.841 { 00:16:26.841 "cntlid": 121, 00:16:26.841 "qid": 0, 00:16:26.841 "state": "enabled", 00:16:26.841 "thread": "nvmf_tgt_poll_group_000", 00:16:26.841 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:26.841 "listen_address": { 00:16:26.841 "trtype": "TCP", 00:16:26.841 "adrfam": "IPv4", 00:16:26.841 "traddr": "10.0.0.2", 00:16:26.841 "trsvcid": "4420" 00:16:26.841 }, 00:16:26.841 "peer_address": { 00:16:26.841 "trtype": "TCP", 00:16:26.841 "adrfam": "IPv4", 00:16:26.841 "traddr": "10.0.0.1", 00:16:26.841 "trsvcid": "40882" 00:16:26.841 }, 00:16:26.841 "auth": { 00:16:26.841 "state": "completed", 00:16:26.841 "digest": "sha512", 00:16:26.841 "dhgroup": "ffdhe4096" 00:16:26.841 } 00:16:26.841 } 00:16:26.841 ]' 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:26.841 09:00:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:26.841 09:00:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.099 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:27.099 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.667 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:27.667 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.667 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.926 09:00:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:27.926 09:00:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.185 00:16:28.185 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.185 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.185 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.443 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.443 { 00:16:28.443 "cntlid": 123, 00:16:28.443 "qid": 0, 00:16:28.443 "state": "enabled", 00:16:28.443 "thread": "nvmf_tgt_poll_group_000", 00:16:28.443 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:28.443 "listen_address": { 00:16:28.443 "trtype": "TCP", 00:16:28.443 "adrfam": "IPv4", 00:16:28.444 "traddr": "10.0.0.2", 00:16:28.444 "trsvcid": "4420" 00:16:28.444 }, 00:16:28.444 "peer_address": { 00:16:28.444 "trtype": "TCP", 00:16:28.444 "adrfam": "IPv4", 00:16:28.444 "traddr": "10.0.0.1", 00:16:28.444 "trsvcid": "56714" 00:16:28.444 }, 00:16:28.444 "auth": { 00:16:28.444 "state": "completed", 00:16:28.444 "digest": "sha512", 00:16:28.444 "dhgroup": "ffdhe4096" 00:16:28.444 } 00:16:28.444 } 00:16:28.444 ]' 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.444 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.703 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:28.703 09:00:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.269 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.269 09:00:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.528 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:29.787 00:16:29.787 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:29.787 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:29.787 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.045 { 00:16:30.045 "cntlid": 125, 00:16:30.045 "qid": 0, 00:16:30.045 "state": "enabled", 00:16:30.045 "thread": "nvmf_tgt_poll_group_000", 00:16:30.045 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:30.045 "listen_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.2", 00:16:30.045 
"trsvcid": "4420" 00:16:30.045 }, 00:16:30.045 "peer_address": { 00:16:30.045 "trtype": "TCP", 00:16:30.045 "adrfam": "IPv4", 00:16:30.045 "traddr": "10.0.0.1", 00:16:30.045 "trsvcid": "56740" 00:16:30.045 }, 00:16:30.045 "auth": { 00:16:30.045 "state": "completed", 00:16:30.045 "digest": "sha512", 00:16:30.045 "dhgroup": "ffdhe4096" 00:16:30.045 } 00:16:30.045 } 00:16:30.045 ]' 00:16:30.045 09:00:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.045 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:30.045 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.045 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:30.045 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.304 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.304 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.304 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.304 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:30.304 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.871 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:30.871 09:00:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.130 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:31.389 00:16:31.389 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.389 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.389 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.647 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.648 { 00:16:31.648 "cntlid": 127, 00:16:31.648 "qid": 0, 00:16:31.648 "state": "enabled", 00:16:31.648 "thread": "nvmf_tgt_poll_group_000", 00:16:31.648 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:31.648 "listen_address": { 00:16:31.648 "trtype": "TCP", 00:16:31.648 "adrfam": "IPv4", 00:16:31.648 "traddr": "10.0.0.2", 00:16:31.648 "trsvcid": "4420" 00:16:31.648 }, 00:16:31.648 "peer_address": { 00:16:31.648 "trtype": "TCP", 00:16:31.648 "adrfam": "IPv4", 00:16:31.648 "traddr": "10.0.0.1", 00:16:31.648 "trsvcid": "56770" 00:16:31.648 }, 00:16:31.648 "auth": { 00:16:31.648 "state": "completed", 00:16:31.648 "digest": "sha512", 00:16:31.648 "dhgroup": "ffdhe4096" 00:16:31.648 } 00:16:31.648 } 00:16:31.648 ]' 00:16:31.648 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.648 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:31.648 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.906 09:00:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:31.906 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.906 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.906 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.906 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.165 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:32.165 09:00:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.732 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
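Each iteration in this log follows the same pattern: restrict the host to one digest/dhgroup pair, register the DH-HMAC-CHAP key with the subsystem, attach a controller, verify the negotiated auth parameters on the resulting qpair, then disconnect and remove the host. The verification step is the `jq -r '.[0].auth.digest'` / `[[ sha512 == \s\h\a\5\1\2 ]]` sequence above. A minimal standalone sketch of just that check is below; it is not part of the test, and the qpairs JSON is a canned copy of the entry printed in the log (the live test fetches it via `rpc.py nvmf_subsystem_get_qpairs`), so the sketch runs without an SPDK target and without `jq`:

```shell
#!/usr/bin/env bash
# Sketch only: the live test obtains this JSON from
#   rpc.py -s /var/tmp/host.sock nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
# Here it is a canned copy of the qpair entry shown in the log above.
qpairs='[{"cntlid": 127, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha512", "dhgroup": "ffdhe4096"}}]'

# Same fields the test extracts with jq -r '.[0].auth.digest' etc.,
# pulled out with sed so the sketch has no jq dependency.
digest=$(printf '%s' "$qpairs"  | sed -n 's/.*"digest": "\([^"]*\)".*/\1/p')
dhgroup=$(printf '%s' "$qpairs" | sed -n 's/.*"dhgroup": "\([^"]*\)".*/\1/p')

# auth.state is the second "state" key (the first is the qpair-level
# "enabled"), so take the last match.
state=$(printf '%s' "$qpairs" | grep -o '"state": "[^"]*"' | tail -1 \
        | sed 's/.*"state": "\([^"]*\)".*/\1/')

# Mirrors the log's [[ sha512 == \s\h\a\5\1\2 ]]-style assertions: only a
# qpair whose auth completed with the configured digest/dhgroup passes.
[[ $digest == sha512 && $dhgroup == ffdhe4096 && $state == completed ]] \
  && echo "auth negotiation verified: $digest/$dhgroup"
```

The teardown that follows each check (`nvme disconnect`, `nvmf_subsystem_remove_host`) resets the target so the next key/dhgroup combination starts from a clean subsystem.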
00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:32.732 09:00:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.301 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.301 09:00:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.301 { 00:16:33.301 "cntlid": 129, 00:16:33.301 "qid": 0, 00:16:33.301 "state": "enabled", 00:16:33.301 "thread": "nvmf_tgt_poll_group_000", 00:16:33.301 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:33.301 "listen_address": { 00:16:33.301 "trtype": "TCP", 00:16:33.301 "adrfam": "IPv4", 00:16:33.301 "traddr": "10.0.0.2", 00:16:33.301 "trsvcid": "4420" 00:16:33.301 }, 00:16:33.301 "peer_address": { 00:16:33.301 "trtype": "TCP", 00:16:33.301 "adrfam": "IPv4", 00:16:33.301 "traddr": "10.0.0.1", 00:16:33.301 "trsvcid": "56792" 00:16:33.301 }, 00:16:33.301 "auth": { 00:16:33.301 "state": "completed", 00:16:33.301 "digest": "sha512", 00:16:33.301 "dhgroup": "ffdhe6144" 00:16:33.301 } 00:16:33.301 } 00:16:33.301 ]' 00:16:33.301 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.560 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.819 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:33.819 09:00:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:34.386 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.386 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.386 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.386 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.386 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.387 09:00:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.387 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.646 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.646 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.646 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.646 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.904 00:16:34.904 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.904 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.904 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.163 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.163 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.163 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.163 09:00:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:35.163 { 00:16:35.163 "cntlid": 131, 00:16:35.163 "qid": 0, 00:16:35.163 "state": "enabled", 00:16:35.163 "thread": "nvmf_tgt_poll_group_000", 00:16:35.163 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:35.163 "listen_address": { 00:16:35.163 "trtype": "TCP", 00:16:35.163 "adrfam": "IPv4", 00:16:35.163 "traddr": "10.0.0.2", 00:16:35.163 
"trsvcid": "4420" 00:16:35.163 }, 00:16:35.163 "peer_address": { 00:16:35.163 "trtype": "TCP", 00:16:35.163 "adrfam": "IPv4", 00:16:35.163 "traddr": "10.0.0.1", 00:16:35.163 "trsvcid": "56828" 00:16:35.163 }, 00:16:35.163 "auth": { 00:16:35.163 "state": "completed", 00:16:35.163 "digest": "sha512", 00:16:35.163 "dhgroup": "ffdhe6144" 00:16:35.163 } 00:16:35.163 } 00:16:35.163 ]' 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.163 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.421 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:35.421 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.988 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:35.988 09:00:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.246 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.504 00:16:36.504 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.504 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:36.504 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.763 { 00:16:36.763 "cntlid": 133, 00:16:36.763 "qid": 0, 00:16:36.763 "state": "enabled", 00:16:36.763 "thread": "nvmf_tgt_poll_group_000", 00:16:36.763 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:36.763 "listen_address": { 00:16:36.763 "trtype": "TCP", 00:16:36.763 "adrfam": "IPv4", 00:16:36.763 "traddr": "10.0.0.2", 00:16:36.763 "trsvcid": "4420" 00:16:36.763 }, 00:16:36.763 "peer_address": { 00:16:36.763 "trtype": "TCP", 00:16:36.763 "adrfam": "IPv4", 00:16:36.763 "traddr": "10.0.0.1", 00:16:36.763 "trsvcid": "56842" 00:16:36.763 }, 00:16:36.763 "auth": { 00:16:36.763 "state": "completed", 00:16:36.763 "digest": "sha512", 00:16:36.763 "dhgroup": "ffdhe6144" 00:16:36.763 } 00:16:36.763 } 00:16:36.763 ]' 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.763 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:36.763 09:00:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:37.021 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:37.021 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:37.021 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:37.021 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:37.021 09:00:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:37.021 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:37.021 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:37.587 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.845 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.845 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.845 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:37.846 09:00:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:38.412 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.412 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.412 { 00:16:38.412 "cntlid": 135, 00:16:38.412 "qid": 0, 00:16:38.412 "state": "enabled", 00:16:38.412 "thread": "nvmf_tgt_poll_group_000", 00:16:38.412 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:38.412 "listen_address": { 00:16:38.412 "trtype": "TCP", 00:16:38.412 "adrfam": "IPv4", 00:16:38.412 "traddr": "10.0.0.2", 00:16:38.412 "trsvcid": "4420" 00:16:38.412 }, 00:16:38.412 "peer_address": { 00:16:38.412 "trtype": "TCP", 00:16:38.412 "adrfam": "IPv4", 00:16:38.412 "traddr": "10.0.0.1", 00:16:38.412 "trsvcid": "41098" 00:16:38.412 }, 00:16:38.412 "auth": { 00:16:38.412 "state": "completed", 00:16:38.412 "digest": "sha512", 00:16:38.412 "dhgroup": "ffdhe6144" 00:16:38.412 } 00:16:38.412 } 00:16:38.412 ]' 00:16:38.413 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.672 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.930 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:38.930 09:00:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:39.497 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:39.497 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.497 09:00:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.756 09:00:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:40.014 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:40.273 { 00:16:40.273 "cntlid": 137, 00:16:40.273 "qid": 0, 00:16:40.273 "state": "enabled", 00:16:40.273 "thread": "nvmf_tgt_poll_group_000", 00:16:40.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:40.273 "listen_address": { 00:16:40.273 "trtype": "TCP", 00:16:40.273 "adrfam": "IPv4", 00:16:40.273 "traddr": "10.0.0.2", 00:16:40.273 
"trsvcid": "4420" 00:16:40.273 }, 00:16:40.273 "peer_address": { 00:16:40.273 "trtype": "TCP", 00:16:40.273 "adrfam": "IPv4", 00:16:40.273 "traddr": "10.0.0.1", 00:16:40.273 "trsvcid": "41114" 00:16:40.273 }, 00:16:40.273 "auth": { 00:16:40.273 "state": "completed", 00:16:40.273 "digest": "sha512", 00:16:40.273 "dhgroup": "ffdhe8192" 00:16:40.273 } 00:16:40.273 } 00:16:40.273 ]' 00:16:40.273 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:40.531 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:40.789 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:40.789 09:00:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.357 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.357 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:41.615 09:00:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.615 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.873 00:16:42.131 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.131 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.131 09:00:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.131 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.131 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.131 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.131 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.131 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.132 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.132 { 00:16:42.132 "cntlid": 139, 00:16:42.132 "qid": 0, 00:16:42.132 "state": "enabled", 00:16:42.132 "thread": "nvmf_tgt_poll_group_000", 00:16:42.132 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:42.132 "listen_address": { 00:16:42.132 "trtype": "TCP", 00:16:42.132 "adrfam": "IPv4", 00:16:42.132 "traddr": "10.0.0.2", 00:16:42.132 "trsvcid": "4420" 00:16:42.132 }, 00:16:42.132 "peer_address": { 00:16:42.132 "trtype": "TCP", 00:16:42.132 "adrfam": "IPv4", 00:16:42.132 "traddr": "10.0.0.1", 00:16:42.132 "trsvcid": "41138" 00:16:42.132 }, 00:16:42.132 "auth": { 00:16:42.132 "state": "completed", 00:16:42.132 "digest": "sha512", 00:16:42.132 "dhgroup": "ffdhe8192" 00:16:42.132 } 00:16:42.132 } 00:16:42.132 ]' 00:16:42.132 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.132 09:00:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.132 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.390 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.390 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.390 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.390 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.390 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.649 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:42.649 09:00:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: --dhchap-ctrl-secret DHHC-1:02:YWZkNDgzZjI0MDBiYTI0ZWNhYTFkODVmNWQ1MzAyNzIwZTk3MTQxMTNhMzYxMmI18IyEgw==: 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.216 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.216 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.474 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.732 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:43.990 09:00:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.990 09:00:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.990 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.990 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:43.990 { 00:16:43.990 "cntlid": 141, 00:16:43.990 "qid": 0, 00:16:43.990 "state": "enabled", 00:16:43.990 "thread": "nvmf_tgt_poll_group_000", 00:16:43.990 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:43.990 "listen_address": { 00:16:43.990 "trtype": "TCP", 00:16:43.990 "adrfam": "IPv4", 00:16:43.990 "traddr": "10.0.0.2", 00:16:43.990 "trsvcid": "4420" 00:16:43.990 }, 00:16:43.990 "peer_address": { 00:16:43.990 "trtype": "TCP", 00:16:43.990 "adrfam": "IPv4", 00:16:43.990 "traddr": "10.0.0.1", 00:16:43.990 "trsvcid": "41156" 00:16:43.990 }, 00:16:43.990 "auth": { 00:16:43.990 "state": "completed", 00:16:43.990 "digest": "sha512", 00:16:43.990 "dhgroup": "ffdhe8192" 00:16:43.990 } 00:16:43.990 } 00:16:43.990 ]' 00:16:43.990 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.248 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.506 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:44.506 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:01:NjQxMGIxYjVkY2EzZDc4ZjRjY2FhNzA0M2U5MzI3NTYRVrgF: 00:16:45.073 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.073 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.073 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:45.073 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.073 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.074 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.074 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.074 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.074 09:01:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.332 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:45.590 00:16:45.590 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:45.590 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.590 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.849 { 00:16:45.849 "cntlid": 143, 00:16:45.849 "qid": 0, 00:16:45.849 "state": "enabled", 00:16:45.849 "thread": "nvmf_tgt_poll_group_000", 00:16:45.849 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:45.849 "listen_address": { 00:16:45.849 "trtype": "TCP", 00:16:45.849 "adrfam": 
"IPv4", 00:16:45.849 "traddr": "10.0.0.2", 00:16:45.849 "trsvcid": "4420" 00:16:45.849 }, 00:16:45.849 "peer_address": { 00:16:45.849 "trtype": "TCP", 00:16:45.849 "adrfam": "IPv4", 00:16:45.849 "traddr": "10.0.0.1", 00:16:45.849 "trsvcid": "41178" 00:16:45.849 }, 00:16:45.849 "auth": { 00:16:45.849 "state": "completed", 00:16:45.849 "digest": "sha512", 00:16:45.849 "dhgroup": "ffdhe8192" 00:16:45.849 } 00:16:45.849 } 00:16:45.849 ]' 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:45.849 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.107 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:46.107 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.107 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.107 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.107 09:01:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.107 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:46.107 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:46.674 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.933 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=, 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:46.933 09:01:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:46.933 09:01:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:47.501 00:16:47.501 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.501 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.501 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.759 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.760 { 00:16:47.760 "cntlid": 145, 00:16:47.760 "qid": 0, 00:16:47.760 "state": "enabled", 00:16:47.760 "thread": "nvmf_tgt_poll_group_000", 00:16:47.760 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:47.760 "listen_address": { 00:16:47.760 "trtype": "TCP", 00:16:47.760 "adrfam": "IPv4", 00:16:47.760 "traddr": "10.0.0.2", 00:16:47.760 "trsvcid": "4420" 00:16:47.760 }, 00:16:47.760 "peer_address": { 00:16:47.760 "trtype": "TCP", 00:16:47.760 "adrfam": "IPv4", 00:16:47.760 "traddr": "10.0.0.1", 00:16:47.760 "trsvcid": "41198" 00:16:47.760 }, 00:16:47.760 "auth": { 00:16:47.760 "state": 
"completed", 00:16:47.760 "digest": "sha512", 00:16:47.760 "dhgroup": "ffdhe8192" 00:16:47.760 } 00:16:47.760 } 00:16:47.760 ]' 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.760 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.019 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:48.019 09:01:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:ZWQyMjk0NDcwNzFkODU5ODI3YzhhN2Y0NTVmMGQ1NGUwOGE1MWMxY2NmMWZiYzFkdPVtog==: --dhchap-ctrl-secret 
DHHC-1:03:MmU2NjI3MjIyMjNjMWFmY2NlYWIzMTc2NzVkZTAxNDI1NmM0MmQ5MGNmZmM5MGRjODRiYzkyY2UwMTMyNTg1M1yab4g=: 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.588 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local 
arg=bdev_connect 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:48.588 09:01:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 00:16:49.156 request: 00:16:49.156 { 00:16:49.156 "name": "nvme0", 00:16:49.156 "trtype": "tcp", 00:16:49.156 "traddr": "10.0.0.2", 00:16:49.156 "adrfam": "ipv4", 00:16:49.156 "trsvcid": "4420", 00:16:49.156 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.156 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.156 "prchk_reftag": false, 00:16:49.156 "prchk_guard": false, 00:16:49.156 "hdgst": false, 00:16:49.156 "ddgst": false, 00:16:49.156 "dhchap_key": "key2", 00:16:49.156 "allow_unrecognized_csi": false, 00:16:49.156 "method": "bdev_nvme_attach_controller", 00:16:49.156 "req_id": 1 00:16:49.156 } 00:16:49.156 Got JSON-RPC error response 00:16:49.156 response: 00:16:49.156 { 00:16:49.156 "code": -5, 00:16:49.156 "message": 
"Input/output error" 00:16:49.156 } 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:49.156 09:01:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.156 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:49.724 request: 00:16:49.724 { 00:16:49.724 "name": "nvme0", 00:16:49.724 "trtype": "tcp", 00:16:49.724 "traddr": "10.0.0.2", 00:16:49.724 "adrfam": "ipv4", 00:16:49.724 "trsvcid": "4420", 00:16:49.724 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.724 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.724 "prchk_reftag": false, 00:16:49.724 "prchk_guard": false, 00:16:49.724 "hdgst": 
false, 00:16:49.724 "ddgst": false, 00:16:49.724 "dhchap_key": "key1", 00:16:49.724 "dhchap_ctrlr_key": "ckey2", 00:16:49.724 "allow_unrecognized_csi": false, 00:16:49.724 "method": "bdev_nvme_attach_controller", 00:16:49.724 "req_id": 1 00:16:49.724 } 00:16:49.724 Got JSON-RPC error response 00:16:49.724 response: 00:16:49.724 { 00:16:49.724 "code": -5, 00:16:49.724 "message": "Input/output error" 00:16:49.724 } 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.724 09:01:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:49.983 request: 00:16:49.983 { 00:16:49.983 "name": "nvme0", 00:16:49.983 "trtype": 
"tcp", 00:16:49.983 "traddr": "10.0.0.2", 00:16:49.983 "adrfam": "ipv4", 00:16:49.983 "trsvcid": "4420", 00:16:49.983 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:49.983 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:49.983 "prchk_reftag": false, 00:16:49.983 "prchk_guard": false, 00:16:49.983 "hdgst": false, 00:16:49.983 "ddgst": false, 00:16:49.983 "dhchap_key": "key1", 00:16:49.983 "dhchap_ctrlr_key": "ckey1", 00:16:49.983 "allow_unrecognized_csi": false, 00:16:49.983 "method": "bdev_nvme_attach_controller", 00:16:49.983 "req_id": 1 00:16:49.983 } 00:16:49.983 Got JSON-RPC error response 00:16:49.983 response: 00:16:49.983 { 00:16:49.983 "code": -5, 00:16:49.983 "message": "Input/output error" 00:16:49.983 } 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 2315021 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 2315021 ']' 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2315021 00:16:49.983 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315021 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315021' 00:16:50.243 killing process with pid 2315021 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2315021 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2315021 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=2337263 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 2337263 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2337263 ']' 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.243 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 2337263 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 2337263 ']' 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:50.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:50.502 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.761 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:50.761 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:16:50.761 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:16:50.761 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.761 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.761 null0 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.oyo 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.OXZ ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.OXZ 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.kAh 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.z3e ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.z3e 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.ybR 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.021 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.l9v ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.l9v 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.VWN 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.022 09:01:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:51.588 nvme0n1 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.846 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.847 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:51.847 { 00:16:51.847 "cntlid": 1, 00:16:51.847 "qid": 0, 00:16:51.847 "state": "enabled", 00:16:51.847 "thread": "nvmf_tgt_poll_group_000", 00:16:51.847 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:51.847 "listen_address": { 00:16:51.847 "trtype": "TCP", 00:16:51.847 "adrfam": "IPv4", 00:16:51.847 "traddr": "10.0.0.2", 00:16:51.847 "trsvcid": "4420" 00:16:51.847 }, 00:16:51.847 "peer_address": { 00:16:51.847 "trtype": "TCP", 00:16:51.847 "adrfam": "IPv4", 00:16:51.847 "traddr": 
"10.0.0.1", 00:16:51.847 "trsvcid": "48934" 00:16:51.847 }, 00:16:51.847 "auth": { 00:16:51.847 "state": "completed", 00:16:51.847 "digest": "sha512", 00:16:51.847 "dhgroup": "ffdhe8192" 00:16:51.847 } 00:16:51.847 } 00:16:51.847 ]' 00:16:51.847 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.105 09:01:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.364 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:52.364 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:52.931 09:01:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.932 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:52.932 09:01:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:53.213 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.213 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.213 09:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.213 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.214 request: 00:16:53.214 { 00:16:53.214 "name": "nvme0", 00:16:53.214 "trtype": "tcp", 00:16:53.214 "traddr": "10.0.0.2", 00:16:53.214 "adrfam": "ipv4", 00:16:53.214 "trsvcid": "4420", 00:16:53.214 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:16:53.214 "prchk_reftag": false, 00:16:53.214 "prchk_guard": false, 00:16:53.214 "hdgst": false, 00:16:53.214 "ddgst": false, 00:16:53.214 "dhchap_key": "key3", 00:16:53.214 
"allow_unrecognized_csi": false, 00:16:53.214 "method": "bdev_nvme_attach_controller", 00:16:53.214 "req_id": 1 00:16:53.214 } 00:16:53.214 Got JSON-RPC error response 00:16:53.214 response: 00:16:53.214 { 00:16:53.214 "code": -5, 00:16:53.214 "message": "Input/output error" 00:16:53.214 } 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.214 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:53.528 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:53.529 09:01:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3
00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:53.529 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:16:53.787 request:
00:16:53.787 {
00:16:53.787 "name": "nvme0",
00:16:53.787 "trtype": "tcp",
00:16:53.787 "traddr": "10.0.0.2",
00:16:53.787 "adrfam": "ipv4",
00:16:53.787 "trsvcid": "4420",
00:16:53.787 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:53.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:53.787 "prchk_reftag": false,
00:16:53.787 "prchk_guard": false,
00:16:53.787 "hdgst": false,
00:16:53.787 "ddgst": false,
00:16:53.787 "dhchap_key": "key3",
00:16:53.787 "allow_unrecognized_csi": false,
00:16:53.787 "method": "bdev_nvme_attach_controller",
00:16:53.787 "req_id": 1
00:16:53.787 }
00:16:53.787 Got JSON-RPC error response
00:16:53.787 response:
00:16:53.787 {
00:16:53.787 "code": -5,
00:16:53.787 "message": "Input/output error"
00:16:53.787 }
00:16:53.788
09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:53.788 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:16:54.046 09:01:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1
00:16:54.306 request:
00:16:54.306 {
00:16:54.306 "name": "nvme0",
00:16:54.306 "trtype": "tcp",
00:16:54.306 "traddr": "10.0.0.2",
00:16:54.306 "adrfam": "ipv4",
00:16:54.306 "trsvcid": "4420",
00:16:54.306 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:54.306 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:54.306 "prchk_reftag": false,
00:16:54.306 "prchk_guard": false,
00:16:54.306 "hdgst": false,
00:16:54.306 "ddgst": false,
00:16:54.306 "dhchap_key": "key0",
00:16:54.306 "dhchap_ctrlr_key": "key1",
00:16:54.306 "allow_unrecognized_csi": false,
00:16:54.306 "method": "bdev_nvme_attach_controller",
00:16:54.306 "req_id": 1
00:16:54.306 }
00:16:54.306 Got JSON-RPC error response
00:16:54.306 response:
00:16:54.306 {
00:16:54.306 "code": -5,
00:16:54.306 "message": "Input/output error"
00:16:54.306 }
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0
00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.306 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:16:54.564 nvme0n1 00:16:54.564 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:16:54.564 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.564 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:16:54.823 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.823 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.823 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:55.082 09:01:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:16:55.649 nvme0n1 00:16:55.649 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:16:55.649 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:16:55.649 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.908 
09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:16:55.908 09:01:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.167 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.167 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:56.167 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: --dhchap-ctrl-secret DHHC-1:03:YmVkZjY1NTA0YmZkYThlZDFmZDIxY2E2OWZhZDAxODE0OGY3ZjNmNTExNTBlNGFjNzVlZTNjMDk0ZTkyNmQ5OCF5rnM=: 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.736 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:16:56.995 09:01:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1
00:16:57.560 request:
00:16:57.560 {
00:16:57.560 "name": "nvme0",
00:16:57.560 "trtype": "tcp",
00:16:57.560 "traddr": "10.0.0.2",
00:16:57.560 "adrfam": "ipv4",
00:16:57.560 "trsvcid": "4420",
00:16:57.560 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:16:57.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562",
00:16:57.560 "prchk_reftag": false,
00:16:57.560 "prchk_guard": false,
00:16:57.560 "hdgst": false,
00:16:57.560 "ddgst": false,
00:16:57.560 "dhchap_key": "key1",
00:16:57.560 "allow_unrecognized_csi": false,
00:16:57.560 "method": "bdev_nvme_attach_controller",
00:16:57.560 "req_id": 1
00:16:57.560 }
00:16:57.560 Got JSON-RPC error response
00:16:57.560 response:
00:16:57.560 {
00:16:57.560 "code": -5,
00:16:57.560 "message": "Input/output error"
00:16:57.560 }
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3
00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:57.560 09:01:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:16:58.126 nvme0n1 00:16:58.127 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:16:58.127 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:16:58.127 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:58.384 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:58.384 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:58.385 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:16:58.642 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:58.643 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:16:58.900 nvme0n1 00:16:58.901 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:16:58.901 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:16:58.901 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.159 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.159 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.159 09:01:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: '' 2s 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: ]] 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NTAzOTQxYmVhMjczMDRhMWEwMGJhMWZmZGEzZDA1YTa+Sgi2: 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:16:59.159 09:01:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:01.687 
09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: 2s 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:01.687 09:01:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: ]] 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MjhmNjY2YmJkZjQ3MjU0ZjliYzQ5OTAzODVmYmU4ZTM2MTBkMzE1OTI5NmQzNmVkhgjQSg==: 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:01.687 09:01:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.589 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:03.589 09:01:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:04.156 nvme0n1 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.156 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:04.724 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:04.724 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:04.724 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.982 09:01:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:05.241 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.242 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:05.809 request: 00:17:05.809 { 00:17:05.809 "name": "nvme0", 00:17:05.809 "dhchap_key": "key1", 00:17:05.809 "dhchap_ctrlr_key": "key3", 00:17:05.809 "method": "bdev_nvme_set_keys", 00:17:05.809 "req_id": 1 00:17:05.809 } 00:17:05.809 Got JSON-RPC error response 00:17:05.809 response: 00:17:05.809 { 00:17:05.809 "code": -13, 00:17:05.809 "message": "Permission denied" 00:17:05.809 } 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:05.809 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:05.809 09:01:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:06.068 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:17:06.068 09:01:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:17:07.005 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:07.005 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:07.005 09:01:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.263 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.264 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:07.831 nvme0n1 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.831 09:01:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:07.831 09:01:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:17:08.398 request: 00:17:08.398 { 00:17:08.398 "name": "nvme0", 00:17:08.398 "dhchap_key": "key2", 00:17:08.398 "dhchap_ctrlr_key": "key0", 00:17:08.398 "method": "bdev_nvme_set_keys", 00:17:08.398 "req_id": 1 00:17:08.398 } 00:17:08.398 Got JSON-RPC error response 00:17:08.398 response: 00:17:08.398 { 00:17:08.398 "code": -13, 00:17:08.398 "message": "Permission denied" 00:17:08.398 } 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:08.398 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.658 09:01:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:17:08.658 09:01:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:17:09.593 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:17:09.593 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:17:09.593 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2315046 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2315046 ']' 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2315046 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2315046 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 2315046' 00:17:09.852 killing process with pid 2315046 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2315046 00:17:09.852 09:01:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2315046 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:10.110 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:10.110 rmmod nvme_tcp 00:17:10.110 rmmod nvme_fabrics 00:17:10.110 rmmod nvme_keyring 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 2337263 ']' 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 2337263 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 2337263 ']' 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 2337263 00:17:10.369 
09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2337263 00:17:10.369 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2337263' 00:17:10.370 killing process with pid 2337263 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 2337263 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 2337263 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:10.370 09:01:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:12.906 09:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:12.906 09:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-save 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-restore 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.oyo /tmp/spdk.key-sha256.kAh /tmp/spdk.key-sha384.ybR /tmp/spdk.key-sha512.VWN /tmp/spdk.key-sha512.OXZ /tmp/spdk.key-sha384.z3e /tmp/spdk.key-sha256.l9v '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:12.906 00:17:12.906 real 2m34.138s 00:17:12.906 user 5m55.308s 00:17:12.906 sys 0m24.441s 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.906 ************************************ 00:17:12.906 END TEST nvmf_auth_target 00:17:12.906 ************************************ 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:12.906 ************************************ 00:17:12.906 START TEST nvmf_bdevio_no_huge 00:17:12.906 ************************************ 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:12.906 * Looking for test storage... 00:17:12.906 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:17:12.906 
09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.906 --rc genhtml_branch_coverage=1 00:17:12.906 --rc genhtml_function_coverage=1 00:17:12.906 --rc genhtml_legend=1 00:17:12.906 --rc 
geninfo_all_blocks=1 00:17:12.906 --rc geninfo_unexecuted_blocks=1 00:17:12.906 00:17:12.906 ' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:12.906 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.906 --rc genhtml_branch_coverage=1 00:17:12.906 --rc genhtml_function_coverage=1 00:17:12.906 --rc genhtml_legend=1 00:17:12.906 --rc geninfo_all_blocks=1 00:17:12.906 --rc geninfo_unexecuted_blocks=1 00:17:12.906 00:17:12.906 ' 00:17:12.906 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.907 --rc genhtml_branch_coverage=1 00:17:12.907 --rc genhtml_function_coverage=1 00:17:12.907 --rc genhtml_legend=1 00:17:12.907 --rc geninfo_all_blocks=1 00:17:12.907 --rc geninfo_unexecuted_blocks=1 00:17:12.907 00:17:12.907 ' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:12.907 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:12.907 --rc genhtml_branch_coverage=1 00:17:12.907 --rc genhtml_function_coverage=1 00:17:12.907 --rc genhtml_legend=1 00:17:12.907 --rc geninfo_all_blocks=1 00:17:12.907 --rc geninfo_unexecuted_blocks=1 00:17:12.907 00:17:12.907 ' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:12.907 09:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:12.907 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:12.907 09:01:28 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:17:12.907 09:01:28 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:17:19.479 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:19.480 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.480 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:19.480 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:19.480 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:19.480 Found net devices under 0000:86:00.0: cvl_0_0 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:19.480 Found net devices under 0000:86:00.1: cvl_0_1 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # create_target_ns 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 
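[Editor's note] The pool arithmetic xtraced above (`setup.sh@31`/`@33`) hands each initiator/target pair two consecutive addresses from a pool starting at 0x0a000001 (10.0.0.1). A minimal standalone sketch of that allocation; the pair count of 2 below is hypothetical for illustration — this log runs with `no=1`:

```shell
# Sketch of the per-pair IP allocation seen in setup_interfaces:
# each pair consumes two consecutive addresses from the pool.
ip_pool=$(( 0x0a000001 ))   # 10.0.0.1, as in the log's ip_pool=0x0a000001
no=2                        # hypothetical: two pairs, for illustration
for (( _dev = 0; _dev < no; _dev++ )); do
  initiator_ip=$(( ip_pool + _dev * 2 ))   # even offset -> initiator
  target_ip=$(( initiator_ip + 1 ))        # odd offset  -> target
  printf 'pair %d: initiator=%#x target=%#x\n' "$_dev" "$initiator_ip" "$target_ip"
done
# pair 0: initiator=0xa000001 target=0xa000002
# pair 1: initiator=0xa000003 target=0xa000004
```

This matches the trace above assigning 10.0.0.1 to `cvl_0_0` (initiator0) and 10.0.0.2 to `cvl_0_1` (target0) for pair 0.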
00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:19.480 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev 
cvl_0_0' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:19.481 10.0.0.1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:19.481 10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:19.481 09:01:34 
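[Editor's note] The `val_to_ip` calls traced above (`setup.sh@11`–`@13`) turn the pool integer into a dotted quad before `ip addr add`. A standalone sketch; the log only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the bit-shift derivation of the octets is an assumption about the helper's internals:

```shell
# Sketch of val_to_ip: split a 32-bit integer into dotted-quad octets.
# The bit-shifting here is assumed; the log shows only the printf step.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1 (initiator0, cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 (target0,    cvl_0_1)
```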
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- 
# [[ -n '' ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:19.481 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:19.481 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:19.481 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:17:19.481 00:17:19.481 --- 10.0.0.1 ping statistics --- 00:17:19.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.481 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:19.481 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:19.481 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:17:19.481 00:17:19.481 --- 10.0.0.2 ping statistics --- 00:17:19.481 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:19.481 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:17:19.481 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:19.482 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:17:19.482 
09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=2344180 00:17:19.482 09:01:34 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 2344180 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 2344180 ']' 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:19.482 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:19.482 09:01:34 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.482 [2024-11-20 09:01:34.856672] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:17:19.482 [2024-11-20 09:01:34.856725] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:19.482 [2024-11-20 09:01:34.942714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:19.482 [2024-11-20 09:01:34.988822] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:19.482 [2024-11-20 09:01:34.988855] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:19.482 [2024-11-20 09:01:34.988862] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:19.482 [2024-11-20 09:01:34.988868] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:19.482 [2024-11-20 09:01:34.988873] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:19.482 [2024-11-20 09:01:34.990132] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:17:19.482 [2024-11-20 09:01:34.990240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:17:19.482 [2024-11-20 09:01:34.990348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:17:19.482 [2024-11-20 09:01:34.990348] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 [2024-11-20 09:01:35.749365] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.741 Malloc0 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.741 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:19.999 [2024-11-20 09:01:35.793665] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:17:19.999 { 00:17:19.999 "params": { 00:17:19.999 "name": "Nvme$subsystem", 00:17:19.999 "trtype": "$TEST_TRANSPORT", 00:17:19.999 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:19.999 "adrfam": "ipv4", 00:17:19.999 "trsvcid": "$NVMF_PORT", 00:17:19.999 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:19.999 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:19.999 "hdgst": ${hdgst:-false}, 00:17:19.999 "ddgst": ${ddgst:-false} 00:17:19.999 }, 00:17:19.999 "method": "bdev_nvme_attach_controller" 00:17:19.999 } 00:17:19.999 EOF 00:17:19.999 )") 00:17:19.999 09:01:35 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:17:19.999 09:01:35 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:17:19.999 "params": { 00:17:19.999 "name": "Nvme1", 00:17:19.999 "trtype": "tcp", 00:17:19.999 "traddr": "10.0.0.2", 00:17:19.999 "adrfam": "ipv4", 00:17:19.999 "trsvcid": "4420", 00:17:19.999 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:19.999 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:19.999 "hdgst": false, 00:17:19.999 "ddgst": false 00:17:19.999 }, 00:17:19.999 "method": "bdev_nvme_attach_controller" 00:17:19.999 }' 00:17:19.999 [2024-11-20 09:01:35.841777] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:17:19.999 [2024-11-20 09:01:35.841826] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2344423 ] 00:17:19.999 [2024-11-20 09:01:35.922445] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:19.999 [2024-11-20 09:01:35.971676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:19.999 [2024-11-20 09:01:35.971783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:17:19.999 [2024-11-20 09:01:35.971784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:20.257 I/O targets: 00:17:20.257 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:20.257 00:17:20.257 00:17:20.257 CUnit - A unit testing framework for C - Version 2.1-3 00:17:20.257 http://cunit.sourceforge.net/ 00:17:20.257 00:17:20.257 00:17:20.257 Suite: bdevio tests on: Nvme1n1 00:17:20.516 Test: blockdev write read block 
...passed 00:17:20.516 Test: blockdev write zeroes read block ...passed 00:17:20.516 Test: blockdev write zeroes read no split ...passed 00:17:20.516 Test: blockdev write zeroes read split ...passed 00:17:20.516 Test: blockdev write zeroes read split partial ...passed 00:17:20.516 Test: blockdev reset ...[2024-11-20 09:01:36.386585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:17:20.516 [2024-11-20 09:01:36.386649] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x908920 (9): Bad file descriptor 00:17:20.516 [2024-11-20 09:01:36.522125] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:17:20.516 passed 00:17:20.774 Test: blockdev write read 8 blocks ...passed 00:17:20.774 Test: blockdev write read size > 128k ...passed 00:17:20.774 Test: blockdev write read invalid size ...passed 00:17:20.774 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:20.774 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:20.774 Test: blockdev write read max offset ...passed 00:17:20.774 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:20.774 Test: blockdev writev readv 8 blocks ...passed 00:17:20.774 Test: blockdev writev readv 30 x 1block ...passed 00:17:20.774 Test: blockdev writev readv block ...passed 00:17:20.774 Test: blockdev writev readv size > 128k ...passed 00:17:20.774 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:20.774 Test: blockdev comparev and writev ...[2024-11-20 09:01:36.773723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.773750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:20.774 
[2024-11-20 09:01:36.773765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.773773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) 
qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:17:20.774 [2024-11-20 09:01:36.774571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:20.774 [2024-11-20 09:01:36.774578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:21.032 passed 00:17:21.032 Test: blockdev nvme passthru rw ...passed 00:17:21.032 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:01:36.856386] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.032 [2024-11-20 09:01:36.856403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:21.032 [2024-11-20 09:01:36.856514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.032 [2024-11-20 09:01:36.856524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:21.032 [2024-11-20 09:01:36.856627] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.032 [2024-11-20 09:01:36.856637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:21.032 [2024-11-20 09:01:36.856744] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:21.032 [2024-11-20 09:01:36.856754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:21.032 passed 00:17:21.032 Test: blockdev nvme admin passthru ...passed 00:17:21.032 Test: blockdev copy ...passed 00:17:21.032 00:17:21.032 Run Summary: Type Total Ran 
Passed Failed Inactive 00:17:21.032 suites 1 1 n/a 0 0 00:17:21.032 tests 23 23 23 0 0 00:17:21.032 asserts 152 152 152 0 n/a 00:17:21.032 00:17:21.032 Elapsed time = 1.299 seconds 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:17:21.290 rmmod nvme_tcp 00:17:21.290 rmmod nvme_fabrics 00:17:21.290 rmmod nvme_keyring 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # 
return 0 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 2344180 ']' 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 2344180 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 2344180 ']' 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 2344180 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:21.290 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2344180 00:17:21.291 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:17:21.291 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:17:21.291 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2344180' 00:17:21.291 killing process with pid 2344180 00:17:21.291 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 2344180 00:17:21.291 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 2344180 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@264 -- # local dev 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:21.855 09:01:37 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@268 -- # delete_main_bridge 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # return 0 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:17:23.760 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:17:23.761 
09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@284 -- # iptr 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-save 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-restore 00:17:23.761 00:17:23.761 real 0m11.170s 00:17:23.761 user 0m14.655s 00:17:23.761 sys 0m5.533s 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:23.761 ************************************ 00:17:23.761 END TEST nvmf_bdevio_no_huge 00:17:23.761 ************************************ 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 
-- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:23.761 ************************************ 00:17:23.761 START TEST nvmf_tls 00:17:23.761 ************************************ 00:17:23.761 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:24.021 * Looking for test storage... 00:17:24.021 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@337 -- # read -ra ver2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:24.021 
09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:17:24.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.021 --rc genhtml_branch_coverage=1 00:17:24.021 --rc genhtml_function_coverage=1 00:17:24.021 --rc genhtml_legend=1 00:17:24.021 --rc geninfo_all_blocks=1 00:17:24.021 --rc geninfo_unexecuted_blocks=1 00:17:24.021 00:17:24.021 ' 00:17:24.021 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:17:24.021 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.021 --rc genhtml_branch_coverage=1 00:17:24.021 --rc genhtml_function_coverage=1 00:17:24.021 --rc genhtml_legend=1 00:17:24.021 --rc geninfo_all_blocks=1 00:17:24.021 --rc geninfo_unexecuted_blocks=1 00:17:24.021 00:17:24.021 ' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:17:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.022 --rc genhtml_branch_coverage=1 00:17:24.022 --rc genhtml_function_coverage=1 00:17:24.022 --rc genhtml_legend=1 00:17:24.022 --rc geninfo_all_blocks=1 00:17:24.022 --rc geninfo_unexecuted_blocks=1 00:17:24.022 00:17:24.022 ' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:17:24.022 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:24.022 --rc genhtml_branch_coverage=1 00:17:24.022 --rc genhtml_function_coverage=1 00:17:24.022 --rc genhtml_legend=1 00:17:24.022 --rc geninfo_all_blocks=1 00:17:24.022 --rc 
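Editor's note: the `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.` and `-` into arrays and compares them component-wise, padding the shorter array with zeros. A minimal standalone sketch of that comparison (the function name `ver_lt` is illustrative, not the name used in `scripts/common.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the component-wise version comparison seen in the
# cmp_versions trace: split on '.' and '-', compare numerically
# left to right, treating missing components as 0.
ver_lt() {
    local -a v1 v2
    IFS='.-' read -ra v1 <<< "$1"
    IFS='.-' read -ra v2 <<< "$2"
    local max=$(( ${#v1[@]} > ${#v2[@]} ? ${#v1[@]} : ${#v2[@]} ))
    local i a b
    for (( i = 0; i < max; i++ )); do
        a=${v1[i]:-0} b=${v2[i]:-0}
        (( a < b )) && return 0   # strictly less somewhere: v1 < v2
        (( a > b )) && return 1   # strictly greater: not less-than
    done
    return 1                      # equal: not less-than
}

ver_lt 1.15 2 && echo "1.15 < 2"   # matches the lcov version check above
```

This is why the trace reports `lt 1.15 2` as true and then enables the `--rc lcov_branch_coverage=1` options for the older lcov.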
geninfo_unexecuted_blocks=1 00:17:24.022 00:17:24.022 ' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:17:24.022 09:01:39 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:17:24.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:17:24.022 09:01:39 
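Editor's note: the `common.sh: line 31: [: : integer expression expected` message captured above comes from an empty variable reaching a numeric test, i.e. `[ '' -eq 1 ]`. The script tolerates the error (the test simply fails), but defaulting the expansion avoids it entirely. A small sketch of the pitfall and the guard (the variable name here is illustrative, not the one in `common.sh`):

```shell
#!/usr/bin/env bash
# An empty string is not a valid operand for -eq, so
#   [ "$flag" -eq 1 ]
# with flag unset/empty prints "integer expression expected",
# as seen in the trace. Supplying a default sidesteps it.
flag=""                              # empty, as in the trace
if [ "${flag:-0}" -eq 1 ]; then      # ${flag:-0} -> "0" when empty
    echo "flag set"
else
    echo "flag not set"
fi
```

The same effect can be had with an explicit emptiness check first, e.g. `[ -n "$flag" ] && [ "$flag" -eq 1 ]`.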
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:17:24.022 09:01:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:17:30.594 
09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:17:30.594 Found 0000:86:00.0 (0x8086 - 0x159b) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:17:30.594 Found 0000:86:00.1 (0x8086 - 0x159b) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:17:30.594 Found net devices under 0000:86:00.0: cvl_0_0 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:17:30.594 Found net devices under 0000:86:00.1: cvl_0_1 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # create_target_ns 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 
00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:17:30.594 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local 
key_initiator=initiator0 key_target=target0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:17:30.595 10.0.0.1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 
00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:17:30.595 10.0.0.2 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp 
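Editor's note: the `val_to_ip` calls traced above turn a 32-bit integer drawn from the address pool (which starts at `0x0a000001`) into dotted-quad form with a single `printf`; that is how `167772161` becomes `10.0.0.1` for `cvl_0_0` and `167772162` becomes `10.0.0.2` for `cvl_0_1`. A standalone sketch of the conversion (mirroring the shape of the traced helper, not a copy of `nvmf/setup.sh`):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to a dotted-quad IPv4 address by
# extracting each byte, most significant first, as the
# val_to_ip steps in the trace do.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```

Each initiator/target pair consumes two consecutive values from the pool, which is why the loop in `setup_interfaces` advances `ip_pool` by 2 per pair.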
== tcp ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:30.595 
09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:17:30.595 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:30.595 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.404 ms 00:17:30.595 00:17:30.595 --- 10.0.0.1 ping statistics --- 00:17:30.595 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.595 rtt min/avg/max/mdev = 0.404/0.404/0.404/0.000 ms 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:30.595 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:17:30.596 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:30.596 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.223 ms 00:17:30.596 00:17:30.596 --- 10.0.0.2 ping statistics --- 00:17:30.596 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:30.596 rtt min/avg/max/mdev = 0.223/0.223/0.223/0.000 ms 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:17:30.596 09:01:45 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 
-- # local dev=target0 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:17:30.596 09:01:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:17:30.596 09:01:46 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2348213 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2348213 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2348213 ']' 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:30.596 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.596 [2024-11-20 09:01:46.103683] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:17:30.596 [2024-11-20 09:01:46.103727] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:30.596 [2024-11-20 09:01:46.185034] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:30.596 [2024-11-20 09:01:46.225715] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:30.596 [2024-11-20 09:01:46.225752] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:17:30.596 [2024-11-20 09:01:46.225759] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:30.596 [2024-11-20 09:01:46.225765] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:30.596 [2024-11-20 09:01:46.225770] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:30.596 [2024-11-20 09:01:46.226355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:30.596 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:30.597 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:30.597 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:30.597 true 00:17:30.597 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.597 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:17:30.855 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:17:30.855 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:17:30.855 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:30.855 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:30.855 09:01:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:17:31.113 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:17:31.113 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:17:31.113 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:31.371 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.371 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:17:31.630 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:31.889 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:31.889 09:01:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:17:32.148 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:17:32.148 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:17:32.148 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:32.406 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # 
key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:17:32.407 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:17:32.665 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.665 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.Ne859tonNo 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.kWQxPKaM4d 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.Ne859tonNo 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.kWQxPKaM4d 
00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:32.666 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:32.924 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.Ne859tonNo 00:17:32.924 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.Ne859tonNo 00:17:32.924 09:01:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:33.183 [2024-11-20 09:01:49.116896] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:33.183 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:33.441 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:33.701 [2024-11-20 09:01:49.485833] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:33.701 [2024-11-20 09:01:49.486028] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:33.701 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:33.701 malloc0 00:17:33.701 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:33.960 09:01:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.Ne859tonNo 00:17:34.219 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:17:34.478 09:01:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.Ne859tonNo 00:17:44.458 Initializing NVMe Controllers 00:17:44.458 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:44.458 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:44.458 Initialization complete. Launching workers. 
00:17:44.458 ======================================================== 00:17:44.458 Latency(us) 00:17:44.458 Device Information : IOPS MiB/s Average min max 00:17:44.458 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16422.79 64.15 3897.18 839.93 5635.86 00:17:44.458 ======================================================== 00:17:44.458 Total : 16422.79 64.15 3897.18 839.93 5635.86 00:17:44.458 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ne859tonNo 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ne859tonNo 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2350648 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2350648 /var/tmp/bdevperf.sock 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2350648 ']' 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:44.458 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.458 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.458 [2024-11-20 09:02:00.439375] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:17:44.458 [2024-11-20 09:02:00.439424] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2350648 ] 00:17:44.718 [2024-11-20 09:02:00.512968] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.718 [2024-11-20 09:02:00.555347] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:44.718 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:44.718 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:44.718 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ne859tonNo 00:17:44.977 09:02:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:17:44.977 [2024-11-20 09:02:01.011132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:45.250 TLSTESTn1 00:17:45.250 09:02:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:45.250 Running I/O for 10 seconds... 00:17:47.594 5576.00 IOPS, 21.78 MiB/s [2024-11-20T08:02:04.571Z] 5652.00 IOPS, 22.08 MiB/s [2024-11-20T08:02:05.508Z] 5604.33 IOPS, 21.89 MiB/s [2024-11-20T08:02:06.515Z] 5570.50 IOPS, 21.76 MiB/s [2024-11-20T08:02:07.511Z] 5556.00 IOPS, 21.70 MiB/s [2024-11-20T08:02:08.447Z] 5546.17 IOPS, 21.66 MiB/s [2024-11-20T08:02:09.387Z] 5541.29 IOPS, 21.65 MiB/s [2024-11-20T08:02:10.325Z] 5524.12 IOPS, 21.58 MiB/s [2024-11-20T08:02:11.262Z] 5505.89 IOPS, 21.51 MiB/s [2024-11-20T08:02:11.262Z] 5506.90 IOPS, 21.51 MiB/s 00:17:55.221 Latency(us) 00:17:55.221 [2024-11-20T08:02:11.262Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.221 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:55.221 Verification LBA range: start 0x0 length 0x2000 00:17:55.221 TLSTESTn1 : 10.01 5512.53 21.53 0.00 0.00 23186.17 5442.34 21313.45 00:17:55.221 [2024-11-20T08:02:11.262Z] =================================================================================================================== 00:17:55.221 [2024-11-20T08:02:11.262Z] Total : 5512.53 21.53 0.00 0.00 23186.17 5442.34 21313.45 00:17:55.221 { 00:17:55.221 "results": [ 00:17:55.221 { 00:17:55.221 "job": "TLSTESTn1", 00:17:55.221 "core_mask": "0x4", 00:17:55.221 "workload": "verify", 00:17:55.221 "status": "finished", 00:17:55.221 "verify_range": { 00:17:55.221 "start": 0, 00:17:55.221 "length": 8192 00:17:55.221 }, 00:17:55.221 "queue_depth": 128, 00:17:55.221 "io_size": 4096, 00:17:55.221 "runtime": 10.012824, 00:17:55.221 "iops": 
5512.53073059109, 00:17:55.221 "mibps": 21.533323166371446, 00:17:55.221 "io_failed": 0, 00:17:55.221 "io_timeout": 0, 00:17:55.221 "avg_latency_us": 23186.165201889235, 00:17:55.221 "min_latency_us": 5442.337391304348, 00:17:55.221 "max_latency_us": 21313.44695652174 00:17:55.221 } 00:17:55.221 ], 00:17:55.221 "core_count": 1 00:17:55.221 } 00:17:55.221 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:55.221 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2350648 00:17:55.221 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2350648 ']' 00:17:55.221 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2350648 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2350648 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2350648' 00:17:55.481 killing process with pid 2350648 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2350648 00:17:55.481 Received shutdown signal, test time was about 10.000000 seconds 00:17:55.481 00:17:55.481 Latency(us) 00:17:55.481 [2024-11-20T08:02:11.522Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:55.481 [2024-11-20T08:02:11.522Z] 
=================================================================================================================== 00:17:55.481 [2024-11-20T08:02:11.522Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2350648 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kWQxPKaM4d 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kWQxPKaM4d 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.kWQxPKaM4d 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.kWQxPKaM4d 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352414 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352414 /var/tmp/bdevperf.sock 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2352414 ']' 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:55.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:55.481 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:55.481 [2024-11-20 09:02:11.516218] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:17:55.481 [2024-11-20 09:02:11.516268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352414 ] 00:17:55.740 [2024-11-20 09:02:11.591514] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:55.740 [2024-11-20 09:02:11.629605] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:55.740 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:55.740 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:55.740 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.kWQxPKaM4d 00:17:55.998 09:02:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:56.257 [2024-11-20 09:02:12.108749] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:56.257 [2024-11-20 09:02:12.113852] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:56.257 [2024-11-20 09:02:12.114209] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78170 (107): Transport endpoint is not connected 00:17:56.257 [2024-11-20 09:02:12.115202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b78170 (9): Bad file descriptor 00:17:56.257 
[2024-11-20 09:02:12.116203] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:56.257 [2024-11-20 09:02:12.116212] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:56.257 [2024-11-20 09:02:12.116219] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:56.257 [2024-11-20 09:02:12.116230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:56.257 request: 00:17:56.257 { 00:17:56.257 "name": "TLSTEST", 00:17:56.257 "trtype": "tcp", 00:17:56.257 "traddr": "10.0.0.2", 00:17:56.257 "adrfam": "ipv4", 00:17:56.257 "trsvcid": "4420", 00:17:56.257 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:56.257 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:56.257 "prchk_reftag": false, 00:17:56.257 "prchk_guard": false, 00:17:56.257 "hdgst": false, 00:17:56.257 "ddgst": false, 00:17:56.257 "psk": "key0", 00:17:56.257 "allow_unrecognized_csi": false, 00:17:56.257 "method": "bdev_nvme_attach_controller", 00:17:56.257 "req_id": 1 00:17:56.257 } 00:17:56.257 Got JSON-RPC error response 00:17:56.257 response: 00:17:56.257 { 00:17:56.257 "code": -5, 00:17:56.257 "message": "Input/output error" 00:17:56.257 } 00:17:56.257 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352414 00:17:56.257 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2352414 ']' 00:17:56.257 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2352414 00:17:56.257 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352414 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352414' 00:17:56.258 killing process with pid 2352414 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2352414 00:17:56.258 Received shutdown signal, test time was about 10.000000 seconds 00:17:56.258 00:17:56.258 Latency(us) 00:17:56.258 [2024-11-20T08:02:12.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:56.258 [2024-11-20T08:02:12.299Z] =================================================================================================================== 00:17:56.258 [2024-11-20T08:02:12.299Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:56.258 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2352414 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ne859tonNo 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ne859tonNo 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.Ne859tonNo 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ne859tonNo 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352645 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352645 
/var/tmp/bdevperf.sock 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2352645 ']' 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:56.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:56.517 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:56.517 [2024-11-20 09:02:12.399361] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:17:56.517 [2024-11-20 09:02:12.399407] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352645 ] 00:17:56.517 [2024-11-20 09:02:12.468076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:56.517 [2024-11-20 09:02:12.505218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:56.775 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:56.775 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:56.775 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ne859tonNo 00:17:56.775 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:17:57.033 [2024-11-20 09:02:12.955914] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.033 [2024-11-20 09:02:12.965966] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.033 [2024-11-20 09:02:12.965988] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:57.033 [2024-11-20 09:02:12.966011] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:57.033 [2024-11-20 09:02:12.966394] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe170 (107): Transport endpoint is not connected 00:17:57.033 [2024-11-20 09:02:12.967387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19fe170 (9): Bad file descriptor 00:17:57.033 [2024-11-20 09:02:12.968389] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:17:57.033 [2024-11-20 09:02:12.968402] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.033 [2024-11-20 09:02:12.968409] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:17:57.033 [2024-11-20 09:02:12.968420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:17:57.033 request: 00:17:57.033 { 00:17:57.033 "name": "TLSTEST", 00:17:57.033 "trtype": "tcp", 00:17:57.033 "traddr": "10.0.0.2", 00:17:57.033 "adrfam": "ipv4", 00:17:57.033 "trsvcid": "4420", 00:17:57.033 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:57.033 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:57.033 "prchk_reftag": false, 00:17:57.033 "prchk_guard": false, 00:17:57.033 "hdgst": false, 00:17:57.033 "ddgst": false, 00:17:57.033 "psk": "key0", 00:17:57.033 "allow_unrecognized_csi": false, 00:17:57.033 "method": "bdev_nvme_attach_controller", 00:17:57.033 "req_id": 1 00:17:57.033 } 00:17:57.033 Got JSON-RPC error response 00:17:57.033 response: 00:17:57.033 { 00:17:57.033 "code": -5, 00:17:57.033 "message": "Input/output error" 00:17:57.033 } 00:17:57.033 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352645 00:17:57.033 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2352645 ']' 00:17:57.033 09:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2352645 00:17:57.033 09:02:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352645 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352645' 00:17:57.033 killing process with pid 2352645 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2352645 00:17:57.033 Received shutdown signal, test time was about 10.000000 seconds 00:17:57.033 00:17:57.033 Latency(us) 00:17:57.033 [2024-11-20T08:02:13.074Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:57.033 [2024-11-20T08:02:13.074Z] =================================================================================================================== 00:17:57.033 [2024-11-20T08:02:13.074Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:57.033 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2352645 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:57.292 09:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ne859tonNo 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ne859tonNo 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.Ne859tonNo 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.Ne859tonNo 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352782 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352782 /var/tmp/bdevperf.sock 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2352782 ']' 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:57.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:57.292 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:57.292 [2024-11-20 09:02:13.250153] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:17:57.292 [2024-11-20 09:02:13.250205] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352782 ] 00:17:57.292 [2024-11-20 09:02:13.316114] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:57.551 [2024-11-20 09:02:13.354245] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:57.551 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:57.551 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:57.551 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.Ne859tonNo 00:17:57.810 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:57.810 [2024-11-20 09:02:13.817132] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:57.810 [2024-11-20 09:02:13.828151] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:57.810 [2024-11-20 09:02:13.828171] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:57.810 [2024-11-20 09:02:13.828194] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:17:57.810 [2024-11-20 09:02:13.828611] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf6170 (107): Transport endpoint is not connected 00:17:57.810 [2024-11-20 09:02:13.829605] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bf6170 (9): Bad file descriptor 00:17:57.810 [2024-11-20 09:02:13.830606] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:17:57.810 [2024-11-20 09:02:13.830616] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:57.810 [2024-11-20 09:02:13.830623] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:17:57.810 [2024-11-20 09:02:13.830634] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:17:57.810 request: 00:17:57.810 { 00:17:57.810 "name": "TLSTEST", 00:17:57.810 "trtype": "tcp", 00:17:57.810 "traddr": "10.0.0.2", 00:17:57.810 "adrfam": "ipv4", 00:17:57.810 "trsvcid": "4420", 00:17:57.810 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:57.810 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:57.810 "prchk_reftag": false, 00:17:57.810 "prchk_guard": false, 00:17:57.810 "hdgst": false, 00:17:57.810 "ddgst": false, 00:17:57.810 "psk": "key0", 00:17:57.810 "allow_unrecognized_csi": false, 00:17:57.810 "method": "bdev_nvme_attach_controller", 00:17:57.810 "req_id": 1 00:17:57.810 } 00:17:57.810 Got JSON-RPC error response 00:17:57.810 response: 00:17:57.810 { 00:17:57.810 "code": -5, 00:17:57.810 "message": "Input/output error" 00:17:57.810 } 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352782 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2352782 ']' 00:17:58.069 09:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2352782 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352782 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352782' 00:17:58.069 killing process with pid 2352782 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2352782 00:17:58.069 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.069 00:17:58.069 Latency(us) 00:17:58.069 [2024-11-20T08:02:14.110Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.069 [2024-11-20T08:02:14.110Z] =================================================================================================================== 00:17:58.069 [2024-11-20T08:02:14.110Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.069 09:02:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2352782 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:58.069 09:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2352900 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.069 09:02:14 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2352900 /var/tmp/bdevperf.sock 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2352900 ']' 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.069 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:58.069 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.328 [2024-11-20 09:02:14.111268] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:17:58.328 [2024-11-20 09:02:14.111317] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2352900 ] 00:17:58.328 [2024-11-20 09:02:14.174430] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.328 [2024-11-20 09:02:14.211835] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.329 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:58.329 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:58.329 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:17:58.587 [2024-11-20 09:02:14.485900] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:17:58.587 [2024-11-20 09:02:14.485932] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:17:58.587 request: 00:17:58.587 { 00:17:58.587 "name": "key0", 00:17:58.587 "path": "", 00:17:58.587 "method": "keyring_file_add_key", 00:17:58.587 "req_id": 1 00:17:58.587 } 00:17:58.587 Got JSON-RPC error response 00:17:58.587 response: 00:17:58.587 { 00:17:58.587 "code": -1, 00:17:58.587 "message": "Operation not permitted" 00:17:58.587 } 00:17:58.587 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:17:58.847 [2024-11-20 09:02:14.694533] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:17:58.847 [2024-11-20 09:02:14.694559] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:17:58.847 request: 00:17:58.847 { 00:17:58.847 "name": "TLSTEST", 00:17:58.847 "trtype": "tcp", 00:17:58.847 "traddr": "10.0.0.2", 00:17:58.847 "adrfam": "ipv4", 00:17:58.847 "trsvcid": "4420", 00:17:58.847 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.847 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.847 "prchk_reftag": false, 00:17:58.847 "prchk_guard": false, 00:17:58.847 "hdgst": false, 00:17:58.847 "ddgst": false, 00:17:58.847 "psk": "key0", 00:17:58.847 "allow_unrecognized_csi": false, 00:17:58.847 "method": "bdev_nvme_attach_controller", 00:17:58.847 "req_id": 1 00:17:58.847 } 00:17:58.847 Got JSON-RPC error response 00:17:58.847 response: 00:17:58.847 { 00:17:58.847 "code": -126, 00:17:58.847 "message": "Required key not available" 00:17:58.847 } 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2352900 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2352900 ']' 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2352900 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2352900 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2352900' 00:17:58.847 killing process with pid 2352900 
00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2352900 00:17:58.847 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.847 00:17:58.847 Latency(us) 00:17:58.847 [2024-11-20T08:02:14.888Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.847 [2024-11-20T08:02:14.888Z] =================================================================================================================== 00:17:58.847 [2024-11-20T08:02:14.888Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.847 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2352900 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 2348213 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2348213 ']' 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2348213 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2348213 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2348213' 00:17:59.106 killing process with pid 2348213 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2348213 00:17:59.106 09:02:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2348213 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.90f9xnQ4x1 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:59.365 09:02:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.90f9xnQ4x1 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.365 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2353146 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2353146 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2353146 ']' 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:59.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:59.366 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.366 [2024-11-20 09:02:15.249274] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:17:59.366 [2024-11-20 09:02:15.249325] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:59.366 [2024-11-20 09:02:15.328873] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:59.366 [2024-11-20 09:02:15.365954] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:59.366 [2024-11-20 09:02:15.365992] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:59.366 [2024-11-20 09:02:15.365998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:59.366 [2024-11-20 09:02:15.366004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:59.366 [2024-11-20 09:02:15.366009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:59.366 [2024-11-20 09:02:15.366566] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.90f9xnQ4x1 00:17:59.625 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.884 [2024-11-20 09:02:15.674358] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.884 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.884 09:02:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:00.142 [2024-11-20 09:02:16.087417] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:00.142 [2024-11-20 09:02:16.087597] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:00.142 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.401 malloc0 00:18:00.401 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.660 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.90f9xnQ4x1 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.90f9xnQ4x1 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2353405 00:18:00.919 09:02:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2353405 /var/tmp/bdevperf.sock 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2353405 ']' 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:00.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:00.919 09:02:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:00.919 [2024-11-20 09:02:16.931888] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:18:00.919 [2024-11-20 09:02:16.931935] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2353405 ] 00:18:01.177 [2024-11-20 09:02:17.002379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.177 [2024-11-20 09:02:17.043221] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:01.177 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:01.177 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:01.177 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:01.435 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:01.693 [2024-11-20 09:02:17.521924] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:01.693 TLSTESTn1 00:18:01.693 09:02:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:01.693 Running I/O for 10 seconds... 
00:18:04.004 5417.00 IOPS, 21.16 MiB/s [2024-11-20T08:02:20.981Z] 5464.50 IOPS, 21.35 MiB/s [2024-11-20T08:02:21.917Z] 5452.00 IOPS, 21.30 MiB/s [2024-11-20T08:02:22.856Z] 5465.25 IOPS, 21.35 MiB/s [2024-11-20T08:02:23.793Z] 5472.20 IOPS, 21.38 MiB/s [2024-11-20T08:02:24.730Z] 5461.17 IOPS, 21.33 MiB/s [2024-11-20T08:02:26.108Z] 5459.57 IOPS, 21.33 MiB/s [2024-11-20T08:02:27.043Z] 5467.38 IOPS, 21.36 MiB/s [2024-11-20T08:02:27.978Z] 5465.11 IOPS, 21.35 MiB/s [2024-11-20T08:02:27.978Z] 5445.60 IOPS, 21.27 MiB/s 00:18:11.937 Latency(us) 00:18:11.937 [2024-11-20T08:02:27.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.937 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:11.937 Verification LBA range: start 0x0 length 0x2000 00:18:11.937 TLSTESTn1 : 10.03 5442.14 21.26 0.00 0.00 23473.00 5670.29 28949.82 00:18:11.937 [2024-11-20T08:02:27.978Z] =================================================================================================================== 00:18:11.937 [2024-11-20T08:02:27.978Z] Total : 5442.14 21.26 0.00 0.00 23473.00 5670.29 28949.82 00:18:11.937 { 00:18:11.937 "results": [ 00:18:11.937 { 00:18:11.937 "job": "TLSTESTn1", 00:18:11.937 "core_mask": "0x4", 00:18:11.937 "workload": "verify", 00:18:11.937 "status": "finished", 00:18:11.937 "verify_range": { 00:18:11.937 "start": 0, 00:18:11.937 "length": 8192 00:18:11.937 }, 00:18:11.937 "queue_depth": 128, 00:18:11.937 "io_size": 4096, 00:18:11.937 "runtime": 10.02988, 00:18:11.937 "iops": 5442.138888999669, 00:18:11.937 "mibps": 21.258355035154956, 00:18:11.937 "io_failed": 0, 00:18:11.937 "io_timeout": 0, 00:18:11.937 "avg_latency_us": 23473.00409229652, 00:18:11.937 "min_latency_us": 5670.288695652174, 00:18:11.937 "max_latency_us": 28949.815652173915 00:18:11.937 } 00:18:11.937 ], 00:18:11.937 "core_count": 1 00:18:11.937 } 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 2353405 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2353405 ']' 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2353405 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353405 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353405' 00:18:11.937 killing process with pid 2353405 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2353405 00:18:11.937 Received shutdown signal, test time was about 10.000000 seconds 00:18:11.937 00:18:11.937 Latency(us) 00:18:11.937 [2024-11-20T08:02:27.978Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:11.937 [2024-11-20T08:02:27.978Z] =================================================================================================================== 00:18:11.937 [2024-11-20T08:02:27.978Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:11.937 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2353405 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.90f9xnQ4x1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@167 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.90f9xnQ4x1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.90f9xnQ4x1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.90f9xnQ4x1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.90f9xnQ4x1 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2355237 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2355237 /var/tmp/bdevperf.sock 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2355237 ']' 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:12.196 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:12.196 09:02:27 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:12.196 [2024-11-20 09:02:28.037693] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:18:12.196 [2024-11-20 09:02:28.037741] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2355237 ] 00:18:12.196 [2024-11-20 09:02:28.107030] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:12.196 [2024-11-20 09:02:28.144754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:12.455 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:12.455 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:12.455 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:12.455 [2024-11-20 09:02:28.411218] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.90f9xnQ4x1': 0100666 00:18:12.455 [2024-11-20 09:02:28.411253] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:12.455 request: 00:18:12.455 { 00:18:12.455 "name": "key0", 00:18:12.455 "path": "/tmp/tmp.90f9xnQ4x1", 00:18:12.455 "method": "keyring_file_add_key", 00:18:12.455 "req_id": 1 00:18:12.455 } 00:18:12.455 Got JSON-RPC error response 00:18:12.455 response: 00:18:12.455 { 00:18:12.455 "code": -1, 00:18:12.455 "message": "Operation not permitted" 00:18:12.455 } 00:18:12.455 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:12.715 [2024-11-20 09:02:28.615843] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:12.715 [2024-11-20 09:02:28.615866] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:12.715 request: 00:18:12.715 { 00:18:12.715 "name": "TLSTEST", 00:18:12.715 "trtype": "tcp", 00:18:12.715 "traddr": "10.0.0.2", 00:18:12.715 "adrfam": "ipv4", 00:18:12.715 "trsvcid": "4420", 00:18:12.715 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:12.715 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:12.715 "prchk_reftag": false, 00:18:12.715 "prchk_guard": false, 00:18:12.715 "hdgst": false, 00:18:12.715 "ddgst": false, 00:18:12.715 "psk": "key0", 00:18:12.715 "allow_unrecognized_csi": false, 00:18:12.715 "method": "bdev_nvme_attach_controller", 00:18:12.715 "req_id": 1 00:18:12.715 } 00:18:12.715 Got JSON-RPC error response 00:18:12.715 response: 00:18:12.715 { 00:18:12.715 "code": -126, 00:18:12.715 "message": "Required key not available" 00:18:12.715 } 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 2355237 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2355237 ']' 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2355237 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2355237 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2355237' 00:18:12.715 killing process with pid 2355237 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2355237 00:18:12.715 Received shutdown signal, test time was about 10.000000 seconds 00:18:12.715 00:18:12.715 Latency(us) 00:18:12.715 [2024-11-20T08:02:28.756Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:12.715 [2024-11-20T08:02:28.756Z] =================================================================================================================== 00:18:12.715 [2024-11-20T08:02:28.756Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:12.715 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2355237 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 2353146 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2353146 ']' 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2353146 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2353146 00:18:12.974 
09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2353146' 00:18:12.974 killing process with pid 2353146 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2353146 00:18:12.974 09:02:28 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2353146 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2355479 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2355479 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2355479 ']' 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:18:13.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.233 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.233 [2024-11-20 09:02:29.128191] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:13.233 [2024-11-20 09:02:29.128239] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:13.233 [2024-11-20 09:02:29.208081] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:13.233 [2024-11-20 09:02:29.248338] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:13.233 [2024-11-20 09:02:29.248375] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.233 [2024-11-20 09:02:29.248383] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.233 [2024-11-20 09:02:29.248389] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.233 [2024-11-20 09:02:29.248394] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:13.233 [2024-11-20 09:02:29.248976] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.90f9xnQ4x1 00:18:13.492 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:13.751 [2024-11-20 09:02:29.551754] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.751 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:13.751 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:14.010 [2024-11-20 09:02:29.932729] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:14.010 [2024-11-20 09:02:29.932941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:14.010 09:02:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:14.268 malloc0 00:18:14.268 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:14.527 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:14.527 [2024-11-20 09:02:30.550512] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.90f9xnQ4x1': 0100666 00:18:14.527 [2024-11-20 09:02:30.550540] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:14.527 request: 00:18:14.527 { 00:18:14.527 "name": "key0", 00:18:14.527 "path": "/tmp/tmp.90f9xnQ4x1", 00:18:14.527 "method": "keyring_file_add_key", 00:18:14.527 "req_id": 1 
00:18:14.527 } 00:18:14.527 Got JSON-RPC error response 00:18:14.527 response: 00:18:14.527 { 00:18:14.527 "code": -1, 00:18:14.527 "message": "Operation not permitted" 00:18:14.527 } 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:14.786 [2024-11-20 09:02:30.735007] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:18:14.786 [2024-11-20 09:02:30.735041] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:14.786 request: 00:18:14.786 { 00:18:14.786 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:14.786 "host": "nqn.2016-06.io.spdk:host1", 00:18:14.786 "psk": "key0", 00:18:14.786 "method": "nvmf_subsystem_add_host", 00:18:14.786 "req_id": 1 00:18:14.786 } 00:18:14.786 Got JSON-RPC error response 00:18:14.786 response: 00:18:14.786 { 00:18:14.786 "code": -32603, 00:18:14.786 "message": "Internal error" 00:18:14.786 } 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 2355479 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2355479 ']' 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2355479 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:14.786 09:02:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2355479 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2355479' 00:18:14.786 killing process with pid 2355479 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2355479 00:18:14.786 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2355479 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.90f9xnQ4x1 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2355742 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2355742 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2355742 ']' 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:15.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:15.046 09:02:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.046 [2024-11-20 09:02:31.025434] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:15.046 [2024-11-20 09:02:31.025482] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:15.305 [2024-11-20 09:02:31.104092] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:15.305 [2024-11-20 09:02:31.142880] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:15.306 [2024-11-20 09:02:31.142915] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:15.306 [2024-11-20 09:02:31.142921] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:15.306 [2024-11-20 09:02:31.142927] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:15.306 [2024-11-20 09:02:31.142932] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
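Earlier in this run, `keyring_file_add_key` rejected the PSK file with "Invalid permissions for key file '/tmp/tmp.90f9xnQ4x1': 0100666", and the test then ran `chmod 0600` before retrying. A hedged sketch of that permission check (the mode mask here is inferred from the observed failure on 0666 and success after 0600, not from SPDK source):

```python
import os
import stat

def key_file_mode_ok(path):
    # Reject key files readable or writable by group/other, mirroring
    # the keyring_file_check_path failure on mode 0100666 in the log.
    # Illustrative check: 0600 passes, 0666 fails.
    mode = stat.S_IMODE(os.stat(path).st_mode)
    return (mode & 0o077) == 0
```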
00:18:15.306 [2024-11-20 09:02:31.143486] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.90f9xnQ4x1 00:18:15.306 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:15.565 [2024-11-20 09:02:31.457956] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:15.565 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:15.823 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:16.082 [2024-11-20 09:02:31.871044] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:16.082 [2024-11-20 09:02:31.871226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:16.082 09:02:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:16.082 malloc0 00:18:16.082 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:16.341 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:16.600 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=2356004 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 2356004 /var/tmp/bdevperf.sock 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2356004 ']' 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:18:16.859 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:16.859 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:16.859 [2024-11-20 09:02:32.718099] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:16.859 [2024-11-20 09:02:32.718153] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356004 ] 00:18:16.859 [2024-11-20 09:02:32.795675] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:16.859 [2024-11-20 09:02:32.836478] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:17.118 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:17.118 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:17.118 09:02:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:17.118 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:17.376 [2024-11-20 09:02:33.311185] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:17.376 TLSTESTn1 00:18:17.376 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:17.944 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:18:17.944 "subsystems": [ 00:18:17.944 { 00:18:17.944 "subsystem": "keyring", 00:18:17.944 "config": [ 00:18:17.944 { 00:18:17.944 "method": "keyring_file_add_key", 00:18:17.944 "params": { 00:18:17.944 "name": "key0", 00:18:17.944 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:17.944 } 00:18:17.944 } 00:18:17.944 ] 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "subsystem": "iobuf", 00:18:17.944 "config": [ 00:18:17.944 { 00:18:17.944 "method": "iobuf_set_options", 00:18:17.944 "params": { 00:18:17.944 "small_pool_count": 8192, 00:18:17.944 "large_pool_count": 1024, 00:18:17.944 "small_bufsize": 8192, 00:18:17.944 "large_bufsize": 135168, 00:18:17.944 "enable_numa": false 00:18:17.944 } 00:18:17.944 } 00:18:17.944 ] 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "subsystem": "sock", 00:18:17.944 "config": [ 00:18:17.944 { 00:18:17.944 "method": "sock_set_default_impl", 00:18:17.944 "params": { 00:18:17.944 "impl_name": "posix" 00:18:17.944 } 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "method": "sock_impl_set_options", 00:18:17.944 "params": { 00:18:17.944 "impl_name": "ssl", 00:18:17.944 "recv_buf_size": 4096, 00:18:17.944 "send_buf_size": 4096, 00:18:17.944 "enable_recv_pipe": true, 00:18:17.944 "enable_quickack": false, 00:18:17.944 "enable_placement_id": 0, 00:18:17.944 "enable_zerocopy_send_server": true, 00:18:17.944 "enable_zerocopy_send_client": false, 00:18:17.944 "zerocopy_threshold": 0, 00:18:17.944 "tls_version": 0, 00:18:17.944 "enable_ktls": false 00:18:17.944 } 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "method": "sock_impl_set_options", 00:18:17.944 "params": { 00:18:17.944 "impl_name": "posix", 00:18:17.944 "recv_buf_size": 2097152, 00:18:17.944 "send_buf_size": 2097152, 00:18:17.944 "enable_recv_pipe": true, 00:18:17.944 "enable_quickack": false, 00:18:17.944 "enable_placement_id": 0, 
00:18:17.944 "enable_zerocopy_send_server": true, 00:18:17.944 "enable_zerocopy_send_client": false, 00:18:17.944 "zerocopy_threshold": 0, 00:18:17.944 "tls_version": 0, 00:18:17.944 "enable_ktls": false 00:18:17.944 } 00:18:17.944 } 00:18:17.944 ] 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "subsystem": "vmd", 00:18:17.944 "config": [] 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "subsystem": "accel", 00:18:17.944 "config": [ 00:18:17.944 { 00:18:17.944 "method": "accel_set_options", 00:18:17.944 "params": { 00:18:17.944 "small_cache_size": 128, 00:18:17.944 "large_cache_size": 16, 00:18:17.944 "task_count": 2048, 00:18:17.944 "sequence_count": 2048, 00:18:17.944 "buf_count": 2048 00:18:17.944 } 00:18:17.944 } 00:18:17.944 ] 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "subsystem": "bdev", 00:18:17.944 "config": [ 00:18:17.944 { 00:18:17.944 "method": "bdev_set_options", 00:18:17.944 "params": { 00:18:17.944 "bdev_io_pool_size": 65535, 00:18:17.944 "bdev_io_cache_size": 256, 00:18:17.944 "bdev_auto_examine": true, 00:18:17.944 "iobuf_small_cache_size": 128, 00:18:17.944 "iobuf_large_cache_size": 16 00:18:17.944 } 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "method": "bdev_raid_set_options", 00:18:17.944 "params": { 00:18:17.944 "process_window_size_kb": 1024, 00:18:17.944 "process_max_bandwidth_mb_sec": 0 00:18:17.944 } 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "method": "bdev_iscsi_set_options", 00:18:17.944 "params": { 00:18:17.944 "timeout_sec": 30 00:18:17.944 } 00:18:17.944 }, 00:18:17.944 { 00:18:17.944 "method": "bdev_nvme_set_options", 00:18:17.944 "params": { 00:18:17.944 "action_on_timeout": "none", 00:18:17.944 "timeout_us": 0, 00:18:17.944 "timeout_admin_us": 0, 00:18:17.945 "keep_alive_timeout_ms": 10000, 00:18:17.945 "arbitration_burst": 0, 00:18:17.945 "low_priority_weight": 0, 00:18:17.945 "medium_priority_weight": 0, 00:18:17.945 "high_priority_weight": 0, 00:18:17.945 "nvme_adminq_poll_period_us": 10000, 00:18:17.945 "nvme_ioq_poll_period_us": 0, 
00:18:17.945 "io_queue_requests": 0, 00:18:17.945 "delay_cmd_submit": true, 00:18:17.945 "transport_retry_count": 4, 00:18:17.945 "bdev_retry_count": 3, 00:18:17.945 "transport_ack_timeout": 0, 00:18:17.945 "ctrlr_loss_timeout_sec": 0, 00:18:17.945 "reconnect_delay_sec": 0, 00:18:17.945 "fast_io_fail_timeout_sec": 0, 00:18:17.945 "disable_auto_failback": false, 00:18:17.945 "generate_uuids": false, 00:18:17.945 "transport_tos": 0, 00:18:17.945 "nvme_error_stat": false, 00:18:17.945 "rdma_srq_size": 0, 00:18:17.945 "io_path_stat": false, 00:18:17.945 "allow_accel_sequence": false, 00:18:17.945 "rdma_max_cq_size": 0, 00:18:17.945 "rdma_cm_event_timeout_ms": 0, 00:18:17.945 "dhchap_digests": [ 00:18:17.945 "sha256", 00:18:17.945 "sha384", 00:18:17.945 "sha512" 00:18:17.945 ], 00:18:17.945 "dhchap_dhgroups": [ 00:18:17.945 "null", 00:18:17.945 "ffdhe2048", 00:18:17.945 "ffdhe3072", 00:18:17.945 "ffdhe4096", 00:18:17.945 "ffdhe6144", 00:18:17.945 "ffdhe8192" 00:18:17.945 ] 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "bdev_nvme_set_hotplug", 00:18:17.945 "params": { 00:18:17.945 "period_us": 100000, 00:18:17.945 "enable": false 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "bdev_malloc_create", 00:18:17.945 "params": { 00:18:17.945 "name": "malloc0", 00:18:17.945 "num_blocks": 8192, 00:18:17.945 "block_size": 4096, 00:18:17.945 "physical_block_size": 4096, 00:18:17.945 "uuid": "8e0b0f6b-31e9-4bba-a453-cf77077efd1f", 00:18:17.945 "optimal_io_boundary": 0, 00:18:17.945 "md_size": 0, 00:18:17.945 "dif_type": 0, 00:18:17.945 "dif_is_head_of_md": false, 00:18:17.945 "dif_pi_format": 0 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "bdev_wait_for_examine" 00:18:17.945 } 00:18:17.945 ] 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "subsystem": "nbd", 00:18:17.945 "config": [] 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "subsystem": "scheduler", 00:18:17.945 "config": [ 00:18:17.945 { 00:18:17.945 "method": 
"framework_set_scheduler", 00:18:17.945 "params": { 00:18:17.945 "name": "static" 00:18:17.945 } 00:18:17.945 } 00:18:17.945 ] 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "subsystem": "nvmf", 00:18:17.945 "config": [ 00:18:17.945 { 00:18:17.945 "method": "nvmf_set_config", 00:18:17.945 "params": { 00:18:17.945 "discovery_filter": "match_any", 00:18:17.945 "admin_cmd_passthru": { 00:18:17.945 "identify_ctrlr": false 00:18:17.945 }, 00:18:17.945 "dhchap_digests": [ 00:18:17.945 "sha256", 00:18:17.945 "sha384", 00:18:17.945 "sha512" 00:18:17.945 ], 00:18:17.945 "dhchap_dhgroups": [ 00:18:17.945 "null", 00:18:17.945 "ffdhe2048", 00:18:17.945 "ffdhe3072", 00:18:17.945 "ffdhe4096", 00:18:17.945 "ffdhe6144", 00:18:17.945 "ffdhe8192" 00:18:17.945 ] 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_set_max_subsystems", 00:18:17.945 "params": { 00:18:17.945 "max_subsystems": 1024 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_set_crdt", 00:18:17.945 "params": { 00:18:17.945 "crdt1": 0, 00:18:17.945 "crdt2": 0, 00:18:17.945 "crdt3": 0 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_create_transport", 00:18:17.945 "params": { 00:18:17.945 "trtype": "TCP", 00:18:17.945 "max_queue_depth": 128, 00:18:17.945 "max_io_qpairs_per_ctrlr": 127, 00:18:17.945 "in_capsule_data_size": 4096, 00:18:17.945 "max_io_size": 131072, 00:18:17.945 "io_unit_size": 131072, 00:18:17.945 "max_aq_depth": 128, 00:18:17.945 "num_shared_buffers": 511, 00:18:17.945 "buf_cache_size": 4294967295, 00:18:17.945 "dif_insert_or_strip": false, 00:18:17.945 "zcopy": false, 00:18:17.945 "c2h_success": false, 00:18:17.945 "sock_priority": 0, 00:18:17.945 "abort_timeout_sec": 1, 00:18:17.945 "ack_timeout": 0, 00:18:17.945 "data_wr_pool_size": 0 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_create_subsystem", 00:18:17.945 "params": { 00:18:17.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.945 
"allow_any_host": false, 00:18:17.945 "serial_number": "SPDK00000000000001", 00:18:17.945 "model_number": "SPDK bdev Controller", 00:18:17.945 "max_namespaces": 10, 00:18:17.945 "min_cntlid": 1, 00:18:17.945 "max_cntlid": 65519, 00:18:17.945 "ana_reporting": false 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_subsystem_add_host", 00:18:17.945 "params": { 00:18:17.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.945 "host": "nqn.2016-06.io.spdk:host1", 00:18:17.945 "psk": "key0" 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_subsystem_add_ns", 00:18:17.945 "params": { 00:18:17.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.945 "namespace": { 00:18:17.945 "nsid": 1, 00:18:17.945 "bdev_name": "malloc0", 00:18:17.945 "nguid": "8E0B0F6B31E94BBAA453CF77077EFD1F", 00:18:17.945 "uuid": "8e0b0f6b-31e9-4bba-a453-cf77077efd1f", 00:18:17.945 "no_auto_visible": false 00:18:17.945 } 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "nvmf_subsystem_add_listener", 00:18:17.945 "params": { 00:18:17.945 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.945 "listen_address": { 00:18:17.945 "trtype": "TCP", 00:18:17.945 "adrfam": "IPv4", 00:18:17.945 "traddr": "10.0.0.2", 00:18:17.945 "trsvcid": "4420" 00:18:17.945 }, 00:18:17.945 "secure_channel": true 00:18:17.945 } 00:18:17.945 } 00:18:17.945 ] 00:18:17.945 } 00:18:17.945 ] 00:18:17.945 }' 00:18:17.945 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:17.945 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:18:17.945 "subsystems": [ 00:18:17.945 { 00:18:17.945 "subsystem": "keyring", 00:18:17.945 "config": [ 00:18:17.945 { 00:18:17.945 "method": "keyring_file_add_key", 00:18:17.945 "params": { 00:18:17.945 "name": "key0", 00:18:17.945 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:17.945 } 
00:18:17.945 } 00:18:17.945 ] 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "subsystem": "iobuf", 00:18:17.945 "config": [ 00:18:17.945 { 00:18:17.945 "method": "iobuf_set_options", 00:18:17.945 "params": { 00:18:17.945 "small_pool_count": 8192, 00:18:17.945 "large_pool_count": 1024, 00:18:17.945 "small_bufsize": 8192, 00:18:17.945 "large_bufsize": 135168, 00:18:17.945 "enable_numa": false 00:18:17.945 } 00:18:17.945 } 00:18:17.945 ] 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "subsystem": "sock", 00:18:17.945 "config": [ 00:18:17.945 { 00:18:17.945 "method": "sock_set_default_impl", 00:18:17.945 "params": { 00:18:17.945 "impl_name": "posix" 00:18:17.945 } 00:18:17.945 }, 00:18:17.945 { 00:18:17.945 "method": "sock_impl_set_options", 00:18:17.945 "params": { 00:18:17.945 "impl_name": "ssl", 00:18:17.945 "recv_buf_size": 4096, 00:18:17.945 "send_buf_size": 4096, 00:18:17.945 "enable_recv_pipe": true, 00:18:17.946 "enable_quickack": false, 00:18:17.946 "enable_placement_id": 0, 00:18:17.946 "enable_zerocopy_send_server": true, 00:18:17.946 "enable_zerocopy_send_client": false, 00:18:17.946 "zerocopy_threshold": 0, 00:18:17.946 "tls_version": 0, 00:18:17.946 "enable_ktls": false 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "sock_impl_set_options", 00:18:17.946 "params": { 00:18:17.946 "impl_name": "posix", 00:18:17.946 "recv_buf_size": 2097152, 00:18:17.946 "send_buf_size": 2097152, 00:18:17.946 "enable_recv_pipe": true, 00:18:17.946 "enable_quickack": false, 00:18:17.946 "enable_placement_id": 0, 00:18:17.946 "enable_zerocopy_send_server": true, 00:18:17.946 "enable_zerocopy_send_client": false, 00:18:17.946 "zerocopy_threshold": 0, 00:18:17.946 "tls_version": 0, 00:18:17.946 "enable_ktls": false 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "vmd", 00:18:17.946 "config": [] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "accel", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 
"method": "accel_set_options", 00:18:17.946 "params": { 00:18:17.946 "small_cache_size": 128, 00:18:17.946 "large_cache_size": 16, 00:18:17.946 "task_count": 2048, 00:18:17.946 "sequence_count": 2048, 00:18:17.946 "buf_count": 2048 00:18:17.946 } 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "bdev", 00:18:17.946 "config": [ 00:18:17.946 { 00:18:17.946 "method": "bdev_set_options", 00:18:17.946 "params": { 00:18:17.946 "bdev_io_pool_size": 65535, 00:18:17.946 "bdev_io_cache_size": 256, 00:18:17.946 "bdev_auto_examine": true, 00:18:17.946 "iobuf_small_cache_size": 128, 00:18:17.946 "iobuf_large_cache_size": 16 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_raid_set_options", 00:18:17.946 "params": { 00:18:17.946 "process_window_size_kb": 1024, 00:18:17.946 "process_max_bandwidth_mb_sec": 0 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_iscsi_set_options", 00:18:17.946 "params": { 00:18:17.946 "timeout_sec": 30 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_nvme_set_options", 00:18:17.946 "params": { 00:18:17.946 "action_on_timeout": "none", 00:18:17.946 "timeout_us": 0, 00:18:17.946 "timeout_admin_us": 0, 00:18:17.946 "keep_alive_timeout_ms": 10000, 00:18:17.946 "arbitration_burst": 0, 00:18:17.946 "low_priority_weight": 0, 00:18:17.946 "medium_priority_weight": 0, 00:18:17.946 "high_priority_weight": 0, 00:18:17.946 "nvme_adminq_poll_period_us": 10000, 00:18:17.946 "nvme_ioq_poll_period_us": 0, 00:18:17.946 "io_queue_requests": 512, 00:18:17.946 "delay_cmd_submit": true, 00:18:17.946 "transport_retry_count": 4, 00:18:17.946 "bdev_retry_count": 3, 00:18:17.946 "transport_ack_timeout": 0, 00:18:17.946 "ctrlr_loss_timeout_sec": 0, 00:18:17.946 "reconnect_delay_sec": 0, 00:18:17.946 "fast_io_fail_timeout_sec": 0, 00:18:17.946 "disable_auto_failback": false, 00:18:17.946 "generate_uuids": false, 00:18:17.946 "transport_tos": 0, 00:18:17.946 
"nvme_error_stat": false, 00:18:17.946 "rdma_srq_size": 0, 00:18:17.946 "io_path_stat": false, 00:18:17.946 "allow_accel_sequence": false, 00:18:17.946 "rdma_max_cq_size": 0, 00:18:17.946 "rdma_cm_event_timeout_ms": 0, 00:18:17.946 "dhchap_digests": [ 00:18:17.946 "sha256", 00:18:17.946 "sha384", 00:18:17.946 "sha512" 00:18:17.946 ], 00:18:17.946 "dhchap_dhgroups": [ 00:18:17.946 "null", 00:18:17.946 "ffdhe2048", 00:18:17.946 "ffdhe3072", 00:18:17.946 "ffdhe4096", 00:18:17.946 "ffdhe6144", 00:18:17.946 "ffdhe8192" 00:18:17.946 ] 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_nvme_attach_controller", 00:18:17.946 "params": { 00:18:17.946 "name": "TLSTEST", 00:18:17.946 "trtype": "TCP", 00:18:17.946 "adrfam": "IPv4", 00:18:17.946 "traddr": "10.0.0.2", 00:18:17.946 "trsvcid": "4420", 00:18:17.946 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:17.946 "prchk_reftag": false, 00:18:17.946 "prchk_guard": false, 00:18:17.946 "ctrlr_loss_timeout_sec": 0, 00:18:17.946 "reconnect_delay_sec": 0, 00:18:17.946 "fast_io_fail_timeout_sec": 0, 00:18:17.946 "psk": "key0", 00:18:17.946 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:17.946 "hdgst": false, 00:18:17.946 "ddgst": false, 00:18:17.946 "multipath": "multipath" 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_nvme_set_hotplug", 00:18:17.946 "params": { 00:18:17.946 "period_us": 100000, 00:18:17.946 "enable": false 00:18:17.946 } 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "method": "bdev_wait_for_examine" 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }, 00:18:17.946 { 00:18:17.946 "subsystem": "nbd", 00:18:17.946 "config": [] 00:18:17.946 } 00:18:17.946 ] 00:18:17.946 }' 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 2356004 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2356004 ']' 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 2356004 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:17.946 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356004 00:18:18.205 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:18.205 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:18.205 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356004' 00:18:18.205 killing process with pid 2356004 00:18:18.205 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2356004 00:18:18.206 Received shutdown signal, test time was about 10.000000 seconds 00:18:18.206 00:18:18.206 Latency(us) 00:18:18.206 [2024-11-20T08:02:34.247Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:18.206 [2024-11-20T08:02:34.247Z] =================================================================================================================== 00:18:18.206 [2024-11-20T08:02:34.247Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:18.206 09:02:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2356004 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 2355742 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2355742 ']' 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2355742 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2355742 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2355742' 00:18:18.206 killing process with pid 2355742 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2355742 00:18:18.206 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2355742 00:18:18.465 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:18.465 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:18.465 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:18.465 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.465 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:18:18.465 "subsystems": [ 00:18:18.465 { 00:18:18.465 "subsystem": "keyring", 00:18:18.465 "config": [ 00:18:18.465 { 00:18:18.465 "method": "keyring_file_add_key", 00:18:18.465 "params": { 00:18:18.465 "name": "key0", 00:18:18.465 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:18.465 } 00:18:18.465 } 00:18:18.465 ] 00:18:18.465 }, 00:18:18.465 { 00:18:18.465 "subsystem": "iobuf", 00:18:18.465 "config": [ 00:18:18.465 { 00:18:18.465 "method": "iobuf_set_options", 00:18:18.465 "params": { 00:18:18.465 "small_pool_count": 8192, 00:18:18.465 "large_pool_count": 1024, 00:18:18.465 "small_bufsize": 8192, 00:18:18.465 "large_bufsize": 135168, 
00:18:18.465 "enable_numa": false 00:18:18.465 } 00:18:18.465 } 00:18:18.465 ] 00:18:18.465 }, 00:18:18.465 { 00:18:18.465 "subsystem": "sock", 00:18:18.465 "config": [ 00:18:18.465 { 00:18:18.465 "method": "sock_set_default_impl", 00:18:18.465 "params": { 00:18:18.465 "impl_name": "posix" 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "sock_impl_set_options", 00:18:18.466 "params": { 00:18:18.466 "impl_name": "ssl", 00:18:18.466 "recv_buf_size": 4096, 00:18:18.466 "send_buf_size": 4096, 00:18:18.466 "enable_recv_pipe": true, 00:18:18.466 "enable_quickack": false, 00:18:18.466 "enable_placement_id": 0, 00:18:18.466 "enable_zerocopy_send_server": true, 00:18:18.466 "enable_zerocopy_send_client": false, 00:18:18.466 "zerocopy_threshold": 0, 00:18:18.466 "tls_version": 0, 00:18:18.466 "enable_ktls": false 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "sock_impl_set_options", 00:18:18.466 "params": { 00:18:18.466 "impl_name": "posix", 00:18:18.466 "recv_buf_size": 2097152, 00:18:18.466 "send_buf_size": 2097152, 00:18:18.466 "enable_recv_pipe": true, 00:18:18.466 "enable_quickack": false, 00:18:18.466 "enable_placement_id": 0, 00:18:18.466 "enable_zerocopy_send_server": true, 00:18:18.466 "enable_zerocopy_send_client": false, 00:18:18.466 "zerocopy_threshold": 0, 00:18:18.466 "tls_version": 0, 00:18:18.466 "enable_ktls": false 00:18:18.466 } 00:18:18.466 } 00:18:18.466 ] 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "subsystem": "vmd", 00:18:18.466 "config": [] 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "subsystem": "accel", 00:18:18.466 "config": [ 00:18:18.466 { 00:18:18.466 "method": "accel_set_options", 00:18:18.466 "params": { 00:18:18.466 "small_cache_size": 128, 00:18:18.466 "large_cache_size": 16, 00:18:18.466 "task_count": 2048, 00:18:18.466 "sequence_count": 2048, 00:18:18.466 "buf_count": 2048 00:18:18.466 } 00:18:18.466 } 00:18:18.466 ] 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "subsystem": "bdev", 00:18:18.466 
"config": [ 00:18:18.466 { 00:18:18.466 "method": "bdev_set_options", 00:18:18.466 "params": { 00:18:18.466 "bdev_io_pool_size": 65535, 00:18:18.466 "bdev_io_cache_size": 256, 00:18:18.466 "bdev_auto_examine": true, 00:18:18.466 "iobuf_small_cache_size": 128, 00:18:18.466 "iobuf_large_cache_size": 16 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_raid_set_options", 00:18:18.466 "params": { 00:18:18.466 "process_window_size_kb": 1024, 00:18:18.466 "process_max_bandwidth_mb_sec": 0 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_iscsi_set_options", 00:18:18.466 "params": { 00:18:18.466 "timeout_sec": 30 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_nvme_set_options", 00:18:18.466 "params": { 00:18:18.466 "action_on_timeout": "none", 00:18:18.466 "timeout_us": 0, 00:18:18.466 "timeout_admin_us": 0, 00:18:18.466 "keep_alive_timeout_ms": 10000, 00:18:18.466 "arbitration_burst": 0, 00:18:18.466 "low_priority_weight": 0, 00:18:18.466 "medium_priority_weight": 0, 00:18:18.466 "high_priority_weight": 0, 00:18:18.466 "nvme_adminq_poll_period_us": 10000, 00:18:18.466 "nvme_ioq_poll_period_us": 0, 00:18:18.466 "io_queue_requests": 0, 00:18:18.466 "delay_cmd_submit": true, 00:18:18.466 "transport_retry_count": 4, 00:18:18.466 "bdev_retry_count": 3, 00:18:18.466 "transport_ack_timeout": 0, 00:18:18.466 "ctrlr_loss_timeout_sec": 0, 00:18:18.466 "reconnect_delay_sec": 0, 00:18:18.466 "fast_io_fail_timeout_sec": 0, 00:18:18.466 "disable_auto_failback": false, 00:18:18.466 "generate_uuids": false, 00:18:18.466 "transport_tos": 0, 00:18:18.466 "nvme_error_stat": false, 00:18:18.466 "rdma_srq_size": 0, 00:18:18.466 "io_path_stat": false, 00:18:18.466 "allow_accel_sequence": false, 00:18:18.466 "rdma_max_cq_size": 0, 00:18:18.466 "rdma_cm_event_timeout_ms": 0, 00:18:18.466 "dhchap_digests": [ 00:18:18.466 "sha256", 00:18:18.466 "sha384", 00:18:18.466 "sha512" 00:18:18.466 ], 00:18:18.466 
"dhchap_dhgroups": [ 00:18:18.466 "null", 00:18:18.466 "ffdhe2048", 00:18:18.466 "ffdhe3072", 00:18:18.466 "ffdhe4096", 00:18:18.466 "ffdhe6144", 00:18:18.466 "ffdhe8192" 00:18:18.466 ] 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_nvme_set_hotplug", 00:18:18.466 "params": { 00:18:18.466 "period_us": 100000, 00:18:18.466 "enable": false 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_malloc_create", 00:18:18.466 "params": { 00:18:18.466 "name": "malloc0", 00:18:18.466 "num_blocks": 8192, 00:18:18.466 "block_size": 4096, 00:18:18.466 "physical_block_size": 4096, 00:18:18.466 "uuid": "8e0b0f6b-31e9-4bba-a453-cf77077efd1f", 00:18:18.466 "optimal_io_boundary": 0, 00:18:18.466 "md_size": 0, 00:18:18.466 "dif_type": 0, 00:18:18.466 "dif_is_head_of_md": false, 00:18:18.466 "dif_pi_format": 0 00:18:18.466 } 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "method": "bdev_wait_for_examine" 00:18:18.466 } 00:18:18.466 ] 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "subsystem": "nbd", 00:18:18.466 "config": [] 00:18:18.466 }, 00:18:18.466 { 00:18:18.466 "subsystem": "scheduler", 00:18:18.466 "config": [ 00:18:18.466 { 00:18:18.466 "method": "framework_set_scheduler", 00:18:18.466 "params": { 00:18:18.466 "name": "static" 00:18:18.467 } 00:18:18.467 } 00:18:18.467 ] 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "subsystem": "nvmf", 00:18:18.467 "config": [ 00:18:18.467 { 00:18:18.467 "method": "nvmf_set_config", 00:18:18.467 "params": { 00:18:18.467 "discovery_filter": "match_any", 00:18:18.467 "admin_cmd_passthru": { 00:18:18.467 "identify_ctrlr": false 00:18:18.467 }, 00:18:18.467 "dhchap_digests": [ 00:18:18.467 "sha256", 00:18:18.467 "sha384", 00:18:18.467 "sha512" 00:18:18.467 ], 00:18:18.467 "dhchap_dhgroups": [ 00:18:18.467 "null", 00:18:18.467 "ffdhe2048", 00:18:18.467 "ffdhe3072", 00:18:18.467 "ffdhe4096", 00:18:18.467 "ffdhe6144", 00:18:18.467 "ffdhe8192" 00:18:18.467 ] 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 
00:18:18.467 "method": "nvmf_set_max_subsystems", 00:18:18.467 "params": { 00:18:18.467 "max_subsystems": 1024 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_set_crdt", 00:18:18.467 "params": { 00:18:18.467 "crdt1": 0, 00:18:18.467 "crdt2": 0, 00:18:18.467 "crdt3": 0 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_create_transport", 00:18:18.467 "params": { 00:18:18.467 "trtype": "TCP", 00:18:18.467 "max_queue_depth": 128, 00:18:18.467 "max_io_qpairs_per_ctrlr": 127, 00:18:18.467 "in_capsule_data_size": 4096, 00:18:18.467 "max_io_size": 131072, 00:18:18.467 "io_unit_size": 131072, 00:18:18.467 "max_aq_depth": 128, 00:18:18.467 "num_shared_buffers": 511, 00:18:18.467 "buf_cache_size": 4294967295, 00:18:18.467 "dif_insert_or_strip": false, 00:18:18.467 "zcopy": false, 00:18:18.467 "c2h_success": false, 00:18:18.467 "sock_priority": 0, 00:18:18.467 "abort_timeout_sec": 1, 00:18:18.467 "ack_timeout": 0, 00:18:18.467 "data_wr_pool_size": 0 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_create_subsystem", 00:18:18.467 "params": { 00:18:18.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.467 "allow_any_host": false, 00:18:18.467 "serial_number": "SPDK00000000000001", 00:18:18.467 "model_number": "SPDK bdev Controller", 00:18:18.467 "max_namespaces": 10, 00:18:18.467 "min_cntlid": 1, 00:18:18.467 "max_cntlid": 65519, 00:18:18.467 "ana_reporting": false 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_subsystem_add_host", 00:18:18.467 "params": { 00:18:18.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.467 "host": "nqn.2016-06.io.spdk:host1", 00:18:18.467 "psk": "key0" 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_subsystem_add_ns", 00:18:18.467 "params": { 00:18:18.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.467 "namespace": { 00:18:18.467 "nsid": 1, 00:18:18.467 "bdev_name": "malloc0", 00:18:18.467 "nguid": 
"8E0B0F6B31E94BBAA453CF77077EFD1F", 00:18:18.467 "uuid": "8e0b0f6b-31e9-4bba-a453-cf77077efd1f", 00:18:18.467 "no_auto_visible": false 00:18:18.467 } 00:18:18.467 } 00:18:18.467 }, 00:18:18.467 { 00:18:18.467 "method": "nvmf_subsystem_add_listener", 00:18:18.467 "params": { 00:18:18.467 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:18.467 "listen_address": { 00:18:18.467 "trtype": "TCP", 00:18:18.467 "adrfam": "IPv4", 00:18:18.467 "traddr": "10.0.0.2", 00:18:18.467 "trsvcid": "4420" 00:18:18.467 }, 00:18:18.467 "secure_channel": true 00:18:18.467 } 00:18:18.467 } 00:18:18.467 ] 00:18:18.467 } 00:18:18.467 ] 00:18:18.467 }' 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2356377 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2356377 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2356377 ']' 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:18.467 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:18.467 09:02:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.467 [2024-11-20 09:02:34.424392] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:18.467 [2024-11-20 09:02:34.424441] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:18.727 [2024-11-20 09:02:34.505150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:18.727 [2024-11-20 09:02:34.546017] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:18.727 [2024-11-20 09:02:34.546051] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:18.727 [2024-11-20 09:02:34.546059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:18.727 [2024-11-20 09:02:34.546065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:18.727 [2024-11-20 09:02:34.546070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:18.727 [2024-11-20 09:02:34.546662] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:18.727 [2024-11-20 09:02:34.757818] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.986 [2024-11-20 09:02:34.789854] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.986 [2024-11-20 09:02:34.790054] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:19.245 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:19.245 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:19.245 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:19.245 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:19.245 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=2356499 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 2356499 /var/tmp/bdevperf.sock 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2356499 ']' 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:19.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.505 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:18:19.505 "subsystems": [ 00:18:19.505 { 00:18:19.505 "subsystem": "keyring", 00:18:19.505 "config": [ 00:18:19.505 { 00:18:19.505 "method": "keyring_file_add_key", 00:18:19.505 "params": { 00:18:19.505 "name": "key0", 00:18:19.505 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:19.505 } 00:18:19.505 } 00:18:19.505 ] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "iobuf", 00:18:19.505 "config": [ 00:18:19.505 { 00:18:19.505 "method": "iobuf_set_options", 00:18:19.505 "params": { 00:18:19.505 "small_pool_count": 8192, 00:18:19.505 "large_pool_count": 1024, 00:18:19.505 "small_bufsize": 8192, 00:18:19.505 "large_bufsize": 135168, 00:18:19.505 "enable_numa": false 00:18:19.505 } 00:18:19.505 } 00:18:19.505 ] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "sock", 00:18:19.505 "config": [ 00:18:19.505 { 00:18:19.505 "method": "sock_set_default_impl", 00:18:19.505 "params": { 00:18:19.505 "impl_name": "posix" 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "sock_impl_set_options", 00:18:19.505 "params": { 00:18:19.505 "impl_name": "ssl", 00:18:19.505 "recv_buf_size": 4096, 00:18:19.505 "send_buf_size": 4096, 00:18:19.505 "enable_recv_pipe": true, 00:18:19.505 "enable_quickack": false, 00:18:19.505 "enable_placement_id": 0, 00:18:19.505 "enable_zerocopy_send_server": true, 00:18:19.505 "enable_zerocopy_send_client": false, 00:18:19.505 "zerocopy_threshold": 0, 00:18:19.505 "tls_version": 0, 00:18:19.505 "enable_ktls": false 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "sock_impl_set_options", 00:18:19.505 "params": { 
00:18:19.505 "impl_name": "posix", 00:18:19.505 "recv_buf_size": 2097152, 00:18:19.505 "send_buf_size": 2097152, 00:18:19.505 "enable_recv_pipe": true, 00:18:19.505 "enable_quickack": false, 00:18:19.505 "enable_placement_id": 0, 00:18:19.505 "enable_zerocopy_send_server": true, 00:18:19.505 "enable_zerocopy_send_client": false, 00:18:19.505 "zerocopy_threshold": 0, 00:18:19.505 "tls_version": 0, 00:18:19.505 "enable_ktls": false 00:18:19.505 } 00:18:19.505 } 00:18:19.505 ] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "vmd", 00:18:19.505 "config": [] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "accel", 00:18:19.505 "config": [ 00:18:19.505 { 00:18:19.505 "method": "accel_set_options", 00:18:19.505 "params": { 00:18:19.505 "small_cache_size": 128, 00:18:19.505 "large_cache_size": 16, 00:18:19.505 "task_count": 2048, 00:18:19.505 "sequence_count": 2048, 00:18:19.505 "buf_count": 2048 00:18:19.505 } 00:18:19.505 } 00:18:19.505 ] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "bdev", 00:18:19.505 "config": [ 00:18:19.505 { 00:18:19.505 "method": "bdev_set_options", 00:18:19.505 "params": { 00:18:19.505 "bdev_io_pool_size": 65535, 00:18:19.505 "bdev_io_cache_size": 256, 00:18:19.505 "bdev_auto_examine": true, 00:18:19.505 "iobuf_small_cache_size": 128, 00:18:19.505 "iobuf_large_cache_size": 16 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "bdev_raid_set_options", 00:18:19.505 "params": { 00:18:19.505 "process_window_size_kb": 1024, 00:18:19.505 "process_max_bandwidth_mb_sec": 0 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "bdev_iscsi_set_options", 00:18:19.505 "params": { 00:18:19.505 "timeout_sec": 30 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "bdev_nvme_set_options", 00:18:19.505 "params": { 00:18:19.505 "action_on_timeout": "none", 00:18:19.505 "timeout_us": 0, 00:18:19.505 "timeout_admin_us": 0, 00:18:19.505 "keep_alive_timeout_ms": 10000, 00:18:19.505 
"arbitration_burst": 0, 00:18:19.505 "low_priority_weight": 0, 00:18:19.505 "medium_priority_weight": 0, 00:18:19.505 "high_priority_weight": 0, 00:18:19.505 "nvme_adminq_poll_period_us": 10000, 00:18:19.505 "nvme_ioq_poll_period_us": 0, 00:18:19.505 "io_queue_requests": 512, 00:18:19.505 "delay_cmd_submit": true, 00:18:19.505 "transport_retry_count": 4, 00:18:19.505 "bdev_retry_count": 3, 00:18:19.505 "transport_ack_timeout": 0, 00:18:19.505 "ctrlr_loss_timeout_sec": 0, 00:18:19.505 "reconnect_delay_sec": 0, 00:18:19.505 "fast_io_fail_timeout_sec": 0, 00:18:19.505 "disable_auto_failback": false, 00:18:19.505 "generate_uuids": false, 00:18:19.505 "transport_tos": 0, 00:18:19.505 "nvme_error_stat": false, 00:18:19.505 "rdma_srq_size": 0, 00:18:19.505 "io_path_stat": false, 00:18:19.505 "allow_accel_sequence": false, 00:18:19.505 "rdma_max_cq_size": 0, 00:18:19.505 "rdma_cm_event_timeout_ms": 0, 00:18:19.505 "dhchap_digests": [ 00:18:19.505 "sha256", 00:18:19.505 "sha384", 00:18:19.505 "sha512" 00:18:19.505 ], 00:18:19.505 "dhchap_dhgroups": [ 00:18:19.505 "null", 00:18:19.505 "ffdhe2048", 00:18:19.505 "ffdhe3072", 00:18:19.505 "ffdhe4096", 00:18:19.505 "ffdhe6144", 00:18:19.505 "ffdhe8192" 00:18:19.505 ] 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "bdev_nvme_attach_controller", 00:18:19.505 "params": { 00:18:19.505 "name": "TLSTEST", 00:18:19.505 "trtype": "TCP", 00:18:19.505 "adrfam": "IPv4", 00:18:19.505 "traddr": "10.0.0.2", 00:18:19.505 "trsvcid": "4420", 00:18:19.505 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:19.505 "prchk_reftag": false, 00:18:19.505 "prchk_guard": false, 00:18:19.505 "ctrlr_loss_timeout_sec": 0, 00:18:19.505 "reconnect_delay_sec": 0, 00:18:19.505 "fast_io_fail_timeout_sec": 0, 00:18:19.505 "psk": "key0", 00:18:19.505 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:19.505 "hdgst": false, 00:18:19.505 "ddgst": false, 00:18:19.505 "multipath": "multipath" 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 
"method": "bdev_nvme_set_hotplug", 00:18:19.505 "params": { 00:18:19.505 "period_us": 100000, 00:18:19.505 "enable": false 00:18:19.505 } 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "method": "bdev_wait_for_examine" 00:18:19.505 } 00:18:19.505 ] 00:18:19.505 }, 00:18:19.505 { 00:18:19.505 "subsystem": "nbd", 00:18:19.506 "config": [] 00:18:19.506 } 00:18:19.506 ] 00:18:19.506 }' 00:18:19.506 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:19.506 09:02:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.506 [2024-11-20 09:02:35.357063] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:19.506 [2024-11-20 09:02:35.357112] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2356499 ] 00:18:19.506 [2024-11-20 09:02:35.432787] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.506 [2024-11-20 09:02:35.473371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:19.764 [2024-11-20 09:02:35.626570] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.333 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:20.333 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:20.333 09:02:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:20.333 Running I/O for 10 seconds... 
00:18:22.649 5445.00 IOPS, 21.27 MiB/s [2024-11-20T08:02:39.629Z] 5237.00 IOPS, 20.46 MiB/s [2024-11-20T08:02:40.567Z] 5207.00 IOPS, 20.34 MiB/s [2024-11-20T08:02:41.504Z] 5180.75 IOPS, 20.24 MiB/s [2024-11-20T08:02:42.443Z] 5135.40 IOPS, 20.06 MiB/s [2024-11-20T08:02:43.380Z] 5116.83 IOPS, 19.99 MiB/s [2024-11-20T08:02:44.317Z] 5112.00 IOPS, 19.97 MiB/s [2024-11-20T08:02:45.693Z] 5118.75 IOPS, 20.00 MiB/s [2024-11-20T08:02:46.627Z] 5121.67 IOPS, 20.01 MiB/s [2024-11-20T08:02:46.627Z] 5126.40 IOPS, 20.02 MiB/s 00:18:30.586 Latency(us) 00:18:30.586 [2024-11-20T08:02:46.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.586 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:30.586 Verification LBA range: start 0x0 length 0x2000 00:18:30.586 TLSTESTn1 : 10.02 5130.10 20.04 0.00 0.00 24914.00 4815.47 33508.84 00:18:30.586 [2024-11-20T08:02:46.627Z] =================================================================================================================== 00:18:30.586 [2024-11-20T08:02:46.627Z] Total : 5130.10 20.04 0.00 0.00 24914.00 4815.47 33508.84 00:18:30.586 { 00:18:30.586 "results": [ 00:18:30.586 { 00:18:30.586 "job": "TLSTESTn1", 00:18:30.586 "core_mask": "0x4", 00:18:30.586 "workload": "verify", 00:18:30.586 "status": "finished", 00:18:30.586 "verify_range": { 00:18:30.586 "start": 0, 00:18:30.586 "length": 8192 00:18:30.586 }, 00:18:30.586 "queue_depth": 128, 00:18:30.586 "io_size": 4096, 00:18:30.586 "runtime": 10.017356, 00:18:30.586 "iops": 5130.096205026556, 00:18:30.586 "mibps": 20.039438300884985, 00:18:30.586 "io_failed": 0, 00:18:30.586 "io_timeout": 0, 00:18:30.586 "avg_latency_us": 24914.00273644847, 00:18:30.586 "min_latency_us": 4815.471304347826, 00:18:30.586 "max_latency_us": 33508.84173913043 00:18:30.586 } 00:18:30.586 ], 00:18:30.586 "core_count": 1 00:18:30.586 } 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 2356499 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2356499 ']' 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2356499 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356499 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356499' 00:18:30.586 killing process with pid 2356499 00:18:30.586 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2356499 00:18:30.586 Received shutdown signal, test time was about 10.000000 seconds 00:18:30.586 00:18:30.586 Latency(us) 00:18:30.586 [2024-11-20T08:02:46.627Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:30.586 [2024-11-20T08:02:46.627Z] =================================================================================================================== 00:18:30.586 [2024-11-20T08:02:46.627Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2356499 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 2356377 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 2356377 ']' 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2356377 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2356377 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2356377' 00:18:30.587 killing process with pid 2356377 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2356377 00:18:30.587 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2356377 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2358342 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2358342 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:30.846 
09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2358342 ']' 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:30.846 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:30.846 09:02:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:30.846 [2024-11-20 09:02:46.839466] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:30.846 [2024-11-20 09:02:46.839513] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:31.158 [2024-11-20 09:02:46.920032] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:31.158 [2024-11-20 09:02:46.958388] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:31.158 [2024-11-20 09:02:46.958424] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:31.158 [2024-11-20 09:02:46.958435] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:31.158 [2024-11-20 09:02:46.958441] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:31.158 [2024-11-20 09:02:46.958447] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:31.158 [2024-11-20 09:02:46.959024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.90f9xnQ4x1 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.90f9xnQ4x1 00:18:31.158 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:31.470 [2024-11-20 09:02:47.275354] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:31.470 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:31.771 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:31.771 [2024-11-20 09:02:47.668360] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:18:31.771 [2024-11-20 09:02:47.668572] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:31.771 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:32.031 malloc0 00:18:32.031 09:02:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:32.290 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:32.290 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=2358683 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 2358683 /var/tmp/bdevperf.sock 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2358683 ']' 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:32.548 
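The key file /tmp/tmp.90f9xnQ4x1 registered above via keyring_file_add_key is a TLS pre-shared key file; its contents are not shown in this log. As a sketch only, assuming the NVMe/TCP PSK interchange format with hash identifier 01 and a 32-byte retained PSK (both assumptions — the real file may differ), such a key string could be produced like this:

```shell
# Sketch: build a PSK interchange string in the NVMe/TCP format
# NVMeTLSkey-1:<hash id>:<base64 key>: . The hash id "01" and the
# 32-byte key length are assumptions; the actual contents of
# /tmp/tmp.90f9xnQ4x1 are not visible in this log.
key_b64=$(head -c 32 /dev/urandom | base64 | tr -d '\n')
psk="NVMeTLSkey-1:01:${key_b64}:"
printf '%s\n' "$psk"
```

A file holding this string is what the keyring_file_add_key and nvmf_subsystem_add_host --psk calls above would consume.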
09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:32.548 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:32.548 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.548 [2024-11-20 09:02:48.511147] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:32.548 [2024-11-20 09:02:48.511193] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2358683 ] 00:18:32.548 [2024-11-20 09:02:48.587437] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:32.807 [2024-11-20 09:02:48.628686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:32.807 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:32.807 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:32.807 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:33.064 09:02:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:33.064 [2024-11-20 09:02:49.100732] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:18:33.322 nvme0n1 00:18:33.322 09:02:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:33.322 Running I/O for 1 seconds... 00:18:34.261 5133.00 IOPS, 20.05 MiB/s 00:18:34.261 Latency(us) 00:18:34.261 [2024-11-20T08:02:50.302Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.261 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:34.261 Verification LBA range: start 0x0 length 0x2000 00:18:34.262 nvme0n1 : 1.01 5187.29 20.26 0.00 0.00 24499.31 5841.25 25530.55 00:18:34.262 [2024-11-20T08:02:50.303Z] =================================================================================================================== 00:18:34.262 [2024-11-20T08:02:50.303Z] Total : 5187.29 20.26 0.00 0.00 24499.31 5841.25 25530.55 00:18:34.262 { 00:18:34.262 "results": [ 00:18:34.262 { 00:18:34.262 "job": "nvme0n1", 00:18:34.262 "core_mask": "0x2", 00:18:34.262 "workload": "verify", 00:18:34.262 "status": "finished", 00:18:34.262 "verify_range": { 00:18:34.262 "start": 0, 00:18:34.262 "length": 8192 00:18:34.262 }, 00:18:34.262 "queue_depth": 128, 00:18:34.262 "io_size": 4096, 00:18:34.262 "runtime": 1.014402, 00:18:34.262 "iops": 5187.292611804787, 00:18:34.262 "mibps": 20.26286176486245, 00:18:34.262 "io_failed": 0, 00:18:34.262 "io_timeout": 0, 00:18:34.262 "avg_latency_us": 24499.306862327103, 00:18:34.262 "min_latency_us": 5841.252173913043, 00:18:34.262 "max_latency_us": 25530.54608695652 00:18:34.262 } 00:18:34.262 ], 00:18:34.262 "core_count": 1 00:18:34.262 } 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 2358683 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2358683 ']' 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 2358683 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358683 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358683' 00:18:34.520 killing process with pid 2358683 00:18:34.520 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2358683 00:18:34.520 Received shutdown signal, test time was about 1.000000 seconds 00:18:34.520 00:18:34.520 Latency(us) 00:18:34.520 [2024-11-20T08:02:50.561Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:34.520 [2024-11-20T08:02:50.562Z] =================================================================================================================== 00:18:34.521 [2024-11-20T08:02:50.562Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2358683 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 2358342 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2358342 ']' 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2358342 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:34.521 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2358342 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2358342' 00:18:34.780 killing process with pid 2358342 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2358342 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2358342 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2359075 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2359075 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2359075 ']' 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.780 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:34.780 09:02:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:34.780 [2024-11-20 09:02:50.812875] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:34.780 [2024-11-20 09:02:50.812920] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:35.040 [2024-11-20 09:02:50.892142] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.040 [2024-11-20 09:02:50.932590] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:35.040 [2024-11-20 09:02:50.932631] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:35.040 [2024-11-20 09:02:50.932638] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:35.040 [2024-11-20 09:02:50.932644] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:35.040 [2024-11-20 09:02:50.932649] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
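The bdevperf summary earlier in this run reports 5187.29 IOPS and 20.26 MiB/s for 4096-byte I/O. The two figures are mutually consistent, since MiB/s = IOPS × io_size / 2^20; a quick cross-check of the numbers from the "results" JSON above:

```python
# Cross-check the run-1 bdevperf figures: for fixed-size 4 KiB I/O,
# throughput in MiB/s equals IOPS * io_size / 2**20.
io_size = 4096                    # bytes, from "io_size": 4096
iops = 5187.292611804787          # from "iops" in the results JSON
mibps = iops * io_size / 2**20
print(round(mibps, 2))            # 20.26, matching the reported "mibps"
```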
00:18:35.040 [2024-11-20 09:02:50.933257] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:35.040 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.040 [2024-11-20 09:02:51.071457] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.299 malloc0 00:18:35.299 [2024-11-20 09:02:51.099614] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.299 [2024-11-20 09:02:51.099803] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=2359097 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 2359097 /var/tmp/bdevperf.sock 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2359097 ']' 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.299 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:35.299 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:35.299 [2024-11-20 09:02:51.173145] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:18:35.299 [2024-11-20 09:02:51.173189] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359097 ] 00:18:35.299 [2024-11-20 09:02:51.247068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:35.299 [2024-11-20 09:02:51.289613] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.558 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:35.558 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:35.558 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.90f9xnQ4x1 00:18:35.558 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:35.817 [2024-11-20 09:02:51.741171] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:35.817 nvme0n1 00:18:35.817 09:02:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:36.077 Running I/O for 1 seconds... 
00:18:37.014 5226.00 IOPS, 20.41 MiB/s 00:18:37.014 Latency(us) 00:18:37.014 [2024-11-20T08:02:53.055Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.014 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:37.014 Verification LBA range: start 0x0 length 0x2000 00:18:37.014 nvme0n1 : 1.01 5292.04 20.67 0.00 0.00 24045.83 5043.42 28379.94 00:18:37.014 [2024-11-20T08:02:53.055Z] =================================================================================================================== 00:18:37.014 [2024-11-20T08:02:53.055Z] Total : 5292.04 20.67 0.00 0.00 24045.83 5043.42 28379.94 00:18:37.014 { 00:18:37.014 "results": [ 00:18:37.014 { 00:18:37.014 "job": "nvme0n1", 00:18:37.014 "core_mask": "0x2", 00:18:37.014 "workload": "verify", 00:18:37.014 "status": "finished", 00:18:37.014 "verify_range": { 00:18:37.014 "start": 0, 00:18:37.014 "length": 8192 00:18:37.014 }, 00:18:37.014 "queue_depth": 128, 00:18:37.014 "io_size": 4096, 00:18:37.014 "runtime": 1.011898, 00:18:37.014 "iops": 5292.035363248075, 00:18:37.014 "mibps": 20.67201313768779, 00:18:37.014 "io_failed": 0, 00:18:37.014 "io_timeout": 0, 00:18:37.014 "avg_latency_us": 24045.82698266553, 00:18:37.014 "min_latency_us": 5043.422608695652, 00:18:37.014 "max_latency_us": 28379.93739130435 00:18:37.014 } 00:18:37.014 ], 00:18:37.014 "core_count": 1 00:18:37.014 } 00:18:37.014 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:18:37.014 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:37.014 09:02:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.274 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:37.274 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:18:37.274 "subsystems": [ 00:18:37.274 { 00:18:37.274 "subsystem": 
"keyring", 00:18:37.274 "config": [ 00:18:37.274 { 00:18:37.274 "method": "keyring_file_add_key", 00:18:37.274 "params": { 00:18:37.274 "name": "key0", 00:18:37.274 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:37.274 } 00:18:37.274 } 00:18:37.274 ] 00:18:37.274 }, 00:18:37.274 { 00:18:37.274 "subsystem": "iobuf", 00:18:37.274 "config": [ 00:18:37.274 { 00:18:37.274 "method": "iobuf_set_options", 00:18:37.274 "params": { 00:18:37.274 "small_pool_count": 8192, 00:18:37.274 "large_pool_count": 1024, 00:18:37.274 "small_bufsize": 8192, 00:18:37.274 "large_bufsize": 135168, 00:18:37.274 "enable_numa": false 00:18:37.274 } 00:18:37.274 } 00:18:37.274 ] 00:18:37.274 }, 00:18:37.274 { 00:18:37.274 "subsystem": "sock", 00:18:37.274 "config": [ 00:18:37.274 { 00:18:37.274 "method": "sock_set_default_impl", 00:18:37.274 "params": { 00:18:37.274 "impl_name": "posix" 00:18:37.274 } 00:18:37.274 }, 00:18:37.274 { 00:18:37.274 "method": "sock_impl_set_options", 00:18:37.274 "params": { 00:18:37.274 "impl_name": "ssl", 00:18:37.274 "recv_buf_size": 4096, 00:18:37.274 "send_buf_size": 4096, 00:18:37.274 "enable_recv_pipe": true, 00:18:37.274 "enable_quickack": false, 00:18:37.274 "enable_placement_id": 0, 00:18:37.274 "enable_zerocopy_send_server": true, 00:18:37.274 "enable_zerocopy_send_client": false, 00:18:37.274 "zerocopy_threshold": 0, 00:18:37.274 "tls_version": 0, 00:18:37.274 "enable_ktls": false 00:18:37.274 } 00:18:37.274 }, 00:18:37.274 { 00:18:37.274 "method": "sock_impl_set_options", 00:18:37.274 "params": { 00:18:37.274 "impl_name": "posix", 00:18:37.274 "recv_buf_size": 2097152, 00:18:37.274 "send_buf_size": 2097152, 00:18:37.274 "enable_recv_pipe": true, 00:18:37.274 "enable_quickack": false, 00:18:37.274 "enable_placement_id": 0, 00:18:37.274 "enable_zerocopy_send_server": true, 00:18:37.274 "enable_zerocopy_send_client": false, 00:18:37.274 "zerocopy_threshold": 0, 00:18:37.274 "tls_version": 0, 00:18:37.274 "enable_ktls": false 00:18:37.274 } 00:18:37.274 } 00:18:37.274 
] 00:18:37.274 }, 00:18:37.274 { 00:18:37.274 "subsystem": "vmd", 00:18:37.274 "config": [] 00:18:37.274 }, 00:18:37.274 { 00:18:37.275 "subsystem": "accel", 00:18:37.275 "config": [ 00:18:37.275 { 00:18:37.275 "method": "accel_set_options", 00:18:37.275 "params": { 00:18:37.275 "small_cache_size": 128, 00:18:37.275 "large_cache_size": 16, 00:18:37.275 "task_count": 2048, 00:18:37.275 "sequence_count": 2048, 00:18:37.275 "buf_count": 2048 00:18:37.275 } 00:18:37.275 } 00:18:37.275 ] 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "subsystem": "bdev", 00:18:37.275 "config": [ 00:18:37.275 { 00:18:37.275 "method": "bdev_set_options", 00:18:37.275 "params": { 00:18:37.275 "bdev_io_pool_size": 65535, 00:18:37.275 "bdev_io_cache_size": 256, 00:18:37.275 "bdev_auto_examine": true, 00:18:37.275 "iobuf_small_cache_size": 128, 00:18:37.275 "iobuf_large_cache_size": 16 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_raid_set_options", 00:18:37.275 "params": { 00:18:37.275 "process_window_size_kb": 1024, 00:18:37.275 "process_max_bandwidth_mb_sec": 0 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_iscsi_set_options", 00:18:37.275 "params": { 00:18:37.275 "timeout_sec": 30 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_nvme_set_options", 00:18:37.275 "params": { 00:18:37.275 "action_on_timeout": "none", 00:18:37.275 "timeout_us": 0, 00:18:37.275 "timeout_admin_us": 0, 00:18:37.275 "keep_alive_timeout_ms": 10000, 00:18:37.275 "arbitration_burst": 0, 00:18:37.275 "low_priority_weight": 0, 00:18:37.275 "medium_priority_weight": 0, 00:18:37.275 "high_priority_weight": 0, 00:18:37.275 "nvme_adminq_poll_period_us": 10000, 00:18:37.275 "nvme_ioq_poll_period_us": 0, 00:18:37.275 "io_queue_requests": 0, 00:18:37.275 "delay_cmd_submit": true, 00:18:37.275 "transport_retry_count": 4, 00:18:37.275 "bdev_retry_count": 3, 00:18:37.275 "transport_ack_timeout": 0, 00:18:37.275 "ctrlr_loss_timeout_sec": 0, 
00:18:37.275 "reconnect_delay_sec": 0, 00:18:37.275 "fast_io_fail_timeout_sec": 0, 00:18:37.275 "disable_auto_failback": false, 00:18:37.275 "generate_uuids": false, 00:18:37.275 "transport_tos": 0, 00:18:37.275 "nvme_error_stat": false, 00:18:37.275 "rdma_srq_size": 0, 00:18:37.275 "io_path_stat": false, 00:18:37.275 "allow_accel_sequence": false, 00:18:37.275 "rdma_max_cq_size": 0, 00:18:37.275 "rdma_cm_event_timeout_ms": 0, 00:18:37.275 "dhchap_digests": [ 00:18:37.275 "sha256", 00:18:37.275 "sha384", 00:18:37.275 "sha512" 00:18:37.275 ], 00:18:37.275 "dhchap_dhgroups": [ 00:18:37.275 "null", 00:18:37.275 "ffdhe2048", 00:18:37.275 "ffdhe3072", 00:18:37.275 "ffdhe4096", 00:18:37.275 "ffdhe6144", 00:18:37.275 "ffdhe8192" 00:18:37.275 ] 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_nvme_set_hotplug", 00:18:37.275 "params": { 00:18:37.275 "period_us": 100000, 00:18:37.275 "enable": false 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_malloc_create", 00:18:37.275 "params": { 00:18:37.275 "name": "malloc0", 00:18:37.275 "num_blocks": 8192, 00:18:37.275 "block_size": 4096, 00:18:37.275 "physical_block_size": 4096, 00:18:37.275 "uuid": "72968102-41d4-41cc-915c-140454bcec8b", 00:18:37.275 "optimal_io_boundary": 0, 00:18:37.275 "md_size": 0, 00:18:37.275 "dif_type": 0, 00:18:37.275 "dif_is_head_of_md": false, 00:18:37.275 "dif_pi_format": 0 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "bdev_wait_for_examine" 00:18:37.275 } 00:18:37.275 ] 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "subsystem": "nbd", 00:18:37.275 "config": [] 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "subsystem": "scheduler", 00:18:37.275 "config": [ 00:18:37.275 { 00:18:37.275 "method": "framework_set_scheduler", 00:18:37.275 "params": { 00:18:37.275 "name": "static" 00:18:37.275 } 00:18:37.275 } 00:18:37.275 ] 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "subsystem": "nvmf", 00:18:37.275 "config": [ 00:18:37.275 { 
00:18:37.275 "method": "nvmf_set_config", 00:18:37.275 "params": { 00:18:37.275 "discovery_filter": "match_any", 00:18:37.275 "admin_cmd_passthru": { 00:18:37.275 "identify_ctrlr": false 00:18:37.275 }, 00:18:37.275 "dhchap_digests": [ 00:18:37.275 "sha256", 00:18:37.275 "sha384", 00:18:37.275 "sha512" 00:18:37.275 ], 00:18:37.275 "dhchap_dhgroups": [ 00:18:37.275 "null", 00:18:37.275 "ffdhe2048", 00:18:37.275 "ffdhe3072", 00:18:37.275 "ffdhe4096", 00:18:37.275 "ffdhe6144", 00:18:37.275 "ffdhe8192" 00:18:37.275 ] 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_set_max_subsystems", 00:18:37.275 "params": { 00:18:37.275 "max_subsystems": 1024 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_set_crdt", 00:18:37.275 "params": { 00:18:37.275 "crdt1": 0, 00:18:37.275 "crdt2": 0, 00:18:37.275 "crdt3": 0 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_create_transport", 00:18:37.275 "params": { 00:18:37.275 "trtype": "TCP", 00:18:37.275 "max_queue_depth": 128, 00:18:37.275 "max_io_qpairs_per_ctrlr": 127, 00:18:37.275 "in_capsule_data_size": 4096, 00:18:37.275 "max_io_size": 131072, 00:18:37.275 "io_unit_size": 131072, 00:18:37.275 "max_aq_depth": 128, 00:18:37.275 "num_shared_buffers": 511, 00:18:37.275 "buf_cache_size": 4294967295, 00:18:37.275 "dif_insert_or_strip": false, 00:18:37.275 "zcopy": false, 00:18:37.275 "c2h_success": false, 00:18:37.275 "sock_priority": 0, 00:18:37.275 "abort_timeout_sec": 1, 00:18:37.275 "ack_timeout": 0, 00:18:37.275 "data_wr_pool_size": 0 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_create_subsystem", 00:18:37.275 "params": { 00:18:37.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.275 "allow_any_host": false, 00:18:37.275 "serial_number": "00000000000000000000", 00:18:37.275 "model_number": "SPDK bdev Controller", 00:18:37.275 "max_namespaces": 32, 00:18:37.275 "min_cntlid": 1, 00:18:37.275 "max_cntlid": 65519, 00:18:37.275 
"ana_reporting": false 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_subsystem_add_host", 00:18:37.275 "params": { 00:18:37.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.275 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.275 "psk": "key0" 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_subsystem_add_ns", 00:18:37.275 "params": { 00:18:37.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.275 "namespace": { 00:18:37.275 "nsid": 1, 00:18:37.275 "bdev_name": "malloc0", 00:18:37.275 "nguid": "7296810241D441CC915C140454BCEC8B", 00:18:37.275 "uuid": "72968102-41d4-41cc-915c-140454bcec8b", 00:18:37.275 "no_auto_visible": false 00:18:37.275 } 00:18:37.275 } 00:18:37.275 }, 00:18:37.275 { 00:18:37.275 "method": "nvmf_subsystem_add_listener", 00:18:37.275 "params": { 00:18:37.275 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.275 "listen_address": { 00:18:37.275 "trtype": "TCP", 00:18:37.275 "adrfam": "IPv4", 00:18:37.275 "traddr": "10.0.0.2", 00:18:37.275 "trsvcid": "4420" 00:18:37.275 }, 00:18:37.275 "secure_channel": false, 00:18:37.275 "sock_impl": "ssl" 00:18:37.275 } 00:18:37.275 } 00:18:37.275 ] 00:18:37.275 } 00:18:37.275 ] 00:18:37.275 }' 00:18:37.275 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:37.535 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:18:37.535 "subsystems": [ 00:18:37.535 { 00:18:37.535 "subsystem": "keyring", 00:18:37.535 "config": [ 00:18:37.535 { 00:18:37.535 "method": "keyring_file_add_key", 00:18:37.535 "params": { 00:18:37.535 "name": "key0", 00:18:37.535 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:37.535 } 00:18:37.535 } 00:18:37.535 ] 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "subsystem": "iobuf", 00:18:37.535 "config": [ 00:18:37.535 { 00:18:37.535 "method": "iobuf_set_options", 00:18:37.535 "params": { 00:18:37.535 
"small_pool_count": 8192, 00:18:37.535 "large_pool_count": 1024, 00:18:37.535 "small_bufsize": 8192, 00:18:37.535 "large_bufsize": 135168, 00:18:37.535 "enable_numa": false 00:18:37.535 } 00:18:37.535 } 00:18:37.535 ] 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "subsystem": "sock", 00:18:37.535 "config": [ 00:18:37.535 { 00:18:37.535 "method": "sock_set_default_impl", 00:18:37.535 "params": { 00:18:37.535 "impl_name": "posix" 00:18:37.535 } 00:18:37.535 }, 00:18:37.535 { 00:18:37.535 "method": "sock_impl_set_options", 00:18:37.535 "params": { 00:18:37.535 "impl_name": "ssl", 00:18:37.535 "recv_buf_size": 4096, 00:18:37.535 "send_buf_size": 4096, 00:18:37.535 "enable_recv_pipe": true, 00:18:37.535 "enable_quickack": false, 00:18:37.535 "enable_placement_id": 0, 00:18:37.535 "enable_zerocopy_send_server": true, 00:18:37.535 "enable_zerocopy_send_client": false, 00:18:37.535 "zerocopy_threshold": 0, 00:18:37.535 "tls_version": 0, 00:18:37.535 "enable_ktls": false 00:18:37.535 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "sock_impl_set_options", 00:18:37.536 "params": { 00:18:37.536 "impl_name": "posix", 00:18:37.536 "recv_buf_size": 2097152, 00:18:37.536 "send_buf_size": 2097152, 00:18:37.536 "enable_recv_pipe": true, 00:18:37.536 "enable_quickack": false, 00:18:37.536 "enable_placement_id": 0, 00:18:37.536 "enable_zerocopy_send_server": true, 00:18:37.536 "enable_zerocopy_send_client": false, 00:18:37.536 "zerocopy_threshold": 0, 00:18:37.536 "tls_version": 0, 00:18:37.536 "enable_ktls": false 00:18:37.536 } 00:18:37.536 } 00:18:37.536 ] 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "subsystem": "vmd", 00:18:37.536 "config": [] 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "subsystem": "accel", 00:18:37.536 "config": [ 00:18:37.536 { 00:18:37.536 "method": "accel_set_options", 00:18:37.536 "params": { 00:18:37.536 "small_cache_size": 128, 00:18:37.536 "large_cache_size": 16, 00:18:37.536 "task_count": 2048, 00:18:37.536 "sequence_count": 2048, 00:18:37.536 
"buf_count": 2048 00:18:37.536 } 00:18:37.536 } 00:18:37.536 ] 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "subsystem": "bdev", 00:18:37.536 "config": [ 00:18:37.536 { 00:18:37.536 "method": "bdev_set_options", 00:18:37.536 "params": { 00:18:37.536 "bdev_io_pool_size": 65535, 00:18:37.536 "bdev_io_cache_size": 256, 00:18:37.536 "bdev_auto_examine": true, 00:18:37.536 "iobuf_small_cache_size": 128, 00:18:37.536 "iobuf_large_cache_size": 16 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_raid_set_options", 00:18:37.536 "params": { 00:18:37.536 "process_window_size_kb": 1024, 00:18:37.536 "process_max_bandwidth_mb_sec": 0 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_iscsi_set_options", 00:18:37.536 "params": { 00:18:37.536 "timeout_sec": 30 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_nvme_set_options", 00:18:37.536 "params": { 00:18:37.536 "action_on_timeout": "none", 00:18:37.536 "timeout_us": 0, 00:18:37.536 "timeout_admin_us": 0, 00:18:37.536 "keep_alive_timeout_ms": 10000, 00:18:37.536 "arbitration_burst": 0, 00:18:37.536 "low_priority_weight": 0, 00:18:37.536 "medium_priority_weight": 0, 00:18:37.536 "high_priority_weight": 0, 00:18:37.536 "nvme_adminq_poll_period_us": 10000, 00:18:37.536 "nvme_ioq_poll_period_us": 0, 00:18:37.536 "io_queue_requests": 512, 00:18:37.536 "delay_cmd_submit": true, 00:18:37.536 "transport_retry_count": 4, 00:18:37.536 "bdev_retry_count": 3, 00:18:37.536 "transport_ack_timeout": 0, 00:18:37.536 "ctrlr_loss_timeout_sec": 0, 00:18:37.536 "reconnect_delay_sec": 0, 00:18:37.536 "fast_io_fail_timeout_sec": 0, 00:18:37.536 "disable_auto_failback": false, 00:18:37.536 "generate_uuids": false, 00:18:37.536 "transport_tos": 0, 00:18:37.536 "nvme_error_stat": false, 00:18:37.536 "rdma_srq_size": 0, 00:18:37.536 "io_path_stat": false, 00:18:37.536 "allow_accel_sequence": false, 00:18:37.536 "rdma_max_cq_size": 0, 00:18:37.536 "rdma_cm_event_timeout_ms": 0, 
00:18:37.536 "dhchap_digests": [ 00:18:37.536 "sha256", 00:18:37.536 "sha384", 00:18:37.536 "sha512" 00:18:37.536 ], 00:18:37.536 "dhchap_dhgroups": [ 00:18:37.536 "null", 00:18:37.536 "ffdhe2048", 00:18:37.536 "ffdhe3072", 00:18:37.536 "ffdhe4096", 00:18:37.536 "ffdhe6144", 00:18:37.536 "ffdhe8192" 00:18:37.536 ] 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_nvme_attach_controller", 00:18:37.536 "params": { 00:18:37.536 "name": "nvme0", 00:18:37.536 "trtype": "TCP", 00:18:37.536 "adrfam": "IPv4", 00:18:37.536 "traddr": "10.0.0.2", 00:18:37.536 "trsvcid": "4420", 00:18:37.536 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.536 "prchk_reftag": false, 00:18:37.536 "prchk_guard": false, 00:18:37.536 "ctrlr_loss_timeout_sec": 0, 00:18:37.536 "reconnect_delay_sec": 0, 00:18:37.536 "fast_io_fail_timeout_sec": 0, 00:18:37.536 "psk": "key0", 00:18:37.536 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:37.536 "hdgst": false, 00:18:37.536 "ddgst": false, 00:18:37.536 "multipath": "multipath" 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_nvme_set_hotplug", 00:18:37.536 "params": { 00:18:37.536 "period_us": 100000, 00:18:37.536 "enable": false 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_enable_histogram", 00:18:37.536 "params": { 00:18:37.536 "name": "nvme0n1", 00:18:37.536 "enable": true 00:18:37.536 } 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "method": "bdev_wait_for_examine" 00:18:37.536 } 00:18:37.536 ] 00:18:37.536 }, 00:18:37.536 { 00:18:37.536 "subsystem": "nbd", 00:18:37.536 "config": [] 00:18:37.536 } 00:18:37.536 ] 00:18:37.536 }' 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 2359097 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2359097 ']' 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2359097 00:18:37.536 09:02:53 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359097 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359097' 00:18:37.536 killing process with pid 2359097 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2359097 00:18:37.536 Received shutdown signal, test time was about 1.000000 seconds 00:18:37.536 00:18:37.536 Latency(us) 00:18:37.536 [2024-11-20T08:02:53.577Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:37.536 [2024-11-20T08:02:53.577Z] =================================================================================================================== 00:18:37.536 [2024-11-20T08:02:53.577Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2359097 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 2359075 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2359075 ']' 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2359075 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:37.536 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:37.536 
09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359075 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359075' 00:18:37.796 killing process with pid 2359075 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2359075 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2359075 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:37.796 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:18:37.796 "subsystems": [ 00:18:37.796 { 00:18:37.796 "subsystem": "keyring", 00:18:37.796 "config": [ 00:18:37.796 { 00:18:37.796 "method": "keyring_file_add_key", 00:18:37.796 "params": { 00:18:37.796 "name": "key0", 00:18:37.796 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:37.796 } 00:18:37.796 } 00:18:37.796 ] 00:18:37.796 }, 00:18:37.796 { 00:18:37.796 "subsystem": "iobuf", 00:18:37.796 "config": [ 00:18:37.796 { 00:18:37.796 "method": "iobuf_set_options", 00:18:37.796 "params": { 00:18:37.796 "small_pool_count": 8192, 00:18:37.796 "large_pool_count": 1024, 00:18:37.796 "small_bufsize": 8192, 00:18:37.796 "large_bufsize": 135168, 00:18:37.796 "enable_numa": false 00:18:37.796 } 00:18:37.796 } 00:18:37.796 ] 00:18:37.796 }, 00:18:37.796 { 00:18:37.796 "subsystem": "sock", 00:18:37.796 "config": [ 
00:18:37.796 { 00:18:37.796 "method": "sock_set_default_impl", 00:18:37.796 "params": { 00:18:37.796 "impl_name": "posix" 00:18:37.796 } 00:18:37.796 }, 00:18:37.796 { 00:18:37.796 "method": "sock_impl_set_options", 00:18:37.796 "params": { 00:18:37.796 "impl_name": "ssl", 00:18:37.796 "recv_buf_size": 4096, 00:18:37.796 "send_buf_size": 4096, 00:18:37.797 "enable_recv_pipe": true, 00:18:37.797 "enable_quickack": false, 00:18:37.797 "enable_placement_id": 0, 00:18:37.797 "enable_zerocopy_send_server": true, 00:18:37.797 "enable_zerocopy_send_client": false, 00:18:37.797 "zerocopy_threshold": 0, 00:18:37.797 "tls_version": 0, 00:18:37.797 "enable_ktls": false 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "sock_impl_set_options", 00:18:37.797 "params": { 00:18:37.797 "impl_name": "posix", 00:18:37.797 "recv_buf_size": 2097152, 00:18:37.797 "send_buf_size": 2097152, 00:18:37.797 "enable_recv_pipe": true, 00:18:37.797 "enable_quickack": false, 00:18:37.797 "enable_placement_id": 0, 00:18:37.797 "enable_zerocopy_send_server": true, 00:18:37.797 "enable_zerocopy_send_client": false, 00:18:37.797 "zerocopy_threshold": 0, 00:18:37.797 "tls_version": 0, 00:18:37.797 "enable_ktls": false 00:18:37.797 } 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "vmd", 00:18:37.797 "config": [] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "accel", 00:18:37.797 "config": [ 00:18:37.797 { 00:18:37.797 "method": "accel_set_options", 00:18:37.797 "params": { 00:18:37.797 "small_cache_size": 128, 00:18:37.797 "large_cache_size": 16, 00:18:37.797 "task_count": 2048, 00:18:37.797 "sequence_count": 2048, 00:18:37.797 "buf_count": 2048 00:18:37.797 } 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "bdev", 00:18:37.797 "config": [ 00:18:37.797 { 00:18:37.797 "method": "bdev_set_options", 00:18:37.797 "params": { 00:18:37.797 "bdev_io_pool_size": 65535, 00:18:37.797 "bdev_io_cache_size": 
256, 00:18:37.797 "bdev_auto_examine": true, 00:18:37.797 "iobuf_small_cache_size": 128, 00:18:37.797 "iobuf_large_cache_size": 16 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_raid_set_options", 00:18:37.797 "params": { 00:18:37.797 "process_window_size_kb": 1024, 00:18:37.797 "process_max_bandwidth_mb_sec": 0 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_iscsi_set_options", 00:18:37.797 "params": { 00:18:37.797 "timeout_sec": 30 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_nvme_set_options", 00:18:37.797 "params": { 00:18:37.797 "action_on_timeout": "none", 00:18:37.797 "timeout_us": 0, 00:18:37.797 "timeout_admin_us": 0, 00:18:37.797 "keep_alive_timeout_ms": 10000, 00:18:37.797 "arbitration_burst": 0, 00:18:37.797 "low_priority_weight": 0, 00:18:37.797 "medium_priority_weight": 0, 00:18:37.797 "high_priority_weight": 0, 00:18:37.797 "nvme_adminq_poll_period_us": 10000, 00:18:37.797 "nvme_ioq_poll_period_us": 0, 00:18:37.797 "io_queue_requests": 0, 00:18:37.797 "delay_cmd_submit": true, 00:18:37.797 "transport_retry_count": 4, 00:18:37.797 "bdev_retry_count": 3, 00:18:37.797 "transport_ack_timeout": 0, 00:18:37.797 "ctrlr_loss_timeout_sec": 0, 00:18:37.797 "reconnect_delay_sec": 0, 00:18:37.797 "fast_io_fail_timeout_sec": 0, 00:18:37.797 "disable_auto_failback": false, 00:18:37.797 "generate_uuids": false, 00:18:37.797 "transport_tos": 0, 00:18:37.797 "nvme_error_stat": false, 00:18:37.797 "rdma_srq_size": 0, 00:18:37.797 "io_path_stat": false, 00:18:37.797 "allow_accel_sequence": false, 00:18:37.797 "rdma_max_cq_size": 0, 00:18:37.797 "rdma_cm_event_timeout_ms": 0, 00:18:37.797 "dhchap_digests": [ 00:18:37.797 "sha256", 00:18:37.797 "sha384", 00:18:37.797 "sha512" 00:18:37.797 ], 00:18:37.797 "dhchap_dhgroups": [ 00:18:37.797 "null", 00:18:37.797 "ffdhe2048", 00:18:37.797 "ffdhe3072", 00:18:37.797 "ffdhe4096", 00:18:37.797 "ffdhe6144", 00:18:37.797 "ffdhe8192" 00:18:37.797 ] 
00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_nvme_set_hotplug", 00:18:37.797 "params": { 00:18:37.797 "period_us": 100000, 00:18:37.797 "enable": false 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_malloc_create", 00:18:37.797 "params": { 00:18:37.797 "name": "malloc0", 00:18:37.797 "num_blocks": 8192, 00:18:37.797 "block_size": 4096, 00:18:37.797 "physical_block_size": 4096, 00:18:37.797 "uuid": "72968102-41d4-41cc-915c-140454bcec8b", 00:18:37.797 "optimal_io_boundary": 0, 00:18:37.797 "md_size": 0, 00:18:37.797 "dif_type": 0, 00:18:37.797 "dif_is_head_of_md": false, 00:18:37.797 "dif_pi_format": 0 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "bdev_wait_for_examine" 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "nbd", 00:18:37.797 "config": [] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "scheduler", 00:18:37.797 "config": [ 00:18:37.797 { 00:18:37.797 "method": "framework_set_scheduler", 00:18:37.797 "params": { 00:18:37.797 "name": "static" 00:18:37.797 } 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "subsystem": "nvmf", 00:18:37.797 "config": [ 00:18:37.797 { 00:18:37.797 "method": "nvmf_set_config", 00:18:37.797 "params": { 00:18:37.797 "discovery_filter": "match_any", 00:18:37.797 "admin_cmd_passthru": { 00:18:37.797 "identify_ctrlr": false 00:18:37.797 }, 00:18:37.797 "dhchap_digests": [ 00:18:37.797 "sha256", 00:18:37.797 "sha384", 00:18:37.797 "sha512" 00:18:37.797 ], 00:18:37.797 "dhchap_dhgroups": [ 00:18:37.797 "null", 00:18:37.797 "ffdhe2048", 00:18:37.797 "ffdhe3072", 00:18:37.797 "ffdhe4096", 00:18:37.797 "ffdhe6144", 00:18:37.797 "ffdhe8192" 00:18:37.797 ] 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "nvmf_set_max_subsystems", 00:18:37.797 "params": { 00:18:37.797 "max_subsystems": 1024 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": 
"nvmf_set_crdt", 00:18:37.797 "params": { 00:18:37.797 "crdt1": 0, 00:18:37.797 "crdt2": 0, 00:18:37.797 "crdt3": 0 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "nvmf_create_transport", 00:18:37.797 "params": { 00:18:37.797 "trtype": "TCP", 00:18:37.797 "max_queue_depth": 128, 00:18:37.797 "max_io_qpairs_per_ctrlr": 127, 00:18:37.797 "in_capsule_data_size": 4096, 00:18:37.797 "max_io_size": 131072, 00:18:37.797 "io_unit_size": 131072, 00:18:37.797 "max_aq_depth": 128, 00:18:37.797 "num_shared_buffers": 511, 00:18:37.797 "buf_cache_size": 4294967295, 00:18:37.797 "dif_insert_or_strip": false, 00:18:37.797 "zcopy": false, 00:18:37.797 "c2h_success": false, 00:18:37.797 "sock_priority": 0, 00:18:37.797 "abort_timeout_sec": 1, 00:18:37.797 "ack_timeout": 0, 00:18:37.797 "data_wr_pool_size": 0 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "nvmf_create_subsystem", 00:18:37.797 "params": { 00:18:37.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.797 "allow_any_host": false, 00:18:37.797 "serial_number": "00000000000000000000", 00:18:37.797 "model_number": "SPDK bdev Controller", 00:18:37.797 "max_namespaces": 32, 00:18:37.797 "min_cntlid": 1, 00:18:37.797 "max_cntlid": 65519, 00:18:37.797 "ana_reporting": false 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "nvmf_subsystem_add_host", 00:18:37.797 "params": { 00:18:37.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.797 "host": "nqn.2016-06.io.spdk:host1", 00:18:37.797 "psk": "key0" 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 00:18:37.797 "method": "nvmf_subsystem_add_ns", 00:18:37.797 "params": { 00:18:37.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.797 "namespace": { 00:18:37.797 "nsid": 1, 00:18:37.797 "bdev_name": "malloc0", 00:18:37.797 "nguid": "7296810241D441CC915C140454BCEC8B", 00:18:37.797 "uuid": "72968102-41d4-41cc-915c-140454bcec8b", 00:18:37.797 "no_auto_visible": false 00:18:37.797 } 00:18:37.797 } 00:18:37.797 }, 00:18:37.797 { 
00:18:37.797 "method": "nvmf_subsystem_add_listener", 00:18:37.797 "params": { 00:18:37.797 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:37.797 "listen_address": { 00:18:37.797 "trtype": "TCP", 00:18:37.797 "adrfam": "IPv4", 00:18:37.797 "traddr": "10.0.0.2", 00:18:37.797 "trsvcid": "4420" 00:18:37.797 }, 00:18:37.797 "secure_channel": false, 00:18:37.797 "sock_impl": "ssl" 00:18:37.797 } 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 } 00:18:37.797 ] 00:18:37.797 }' 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=2359571 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 2359571 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2359571 ']' 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:37.797 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:37.797 09:02:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:37.797 [2024-11-20 09:02:53.835106] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:18:37.797 [2024-11-20 09:02:53.835152] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:38.057 [2024-11-20 09:02:53.913198] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.057 [2024-11-20 09:02:53.949196] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:38.057 [2024-11-20 09:02:53.949230] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:38.057 [2024-11-20 09:02:53.949237] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:38.057 [2024-11-20 09:02:53.949242] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:38.057 [2024-11-20 09:02:53.949247] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:38.057 [2024-11-20 09:02:53.949857] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:38.316 [2024-11-20 09:02:54.163484] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:38.316 [2024-11-20 09:02:54.195517] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:38.316 [2024-11-20 09:02:54.195724] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=2359817 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 2359817 /var/tmp/bdevperf.sock 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 2359817 ']' 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:38.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:38.886 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:18:38.886 "subsystems": [ 00:18:38.886 { 00:18:38.886 "subsystem": "keyring", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "keyring_file_add_key", 00:18:38.886 "params": { 00:18:38.886 "name": "key0", 00:18:38.886 "path": "/tmp/tmp.90f9xnQ4x1" 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "iobuf", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "iobuf_set_options", 00:18:38.886 "params": { 00:18:38.886 "small_pool_count": 8192, 00:18:38.886 "large_pool_count": 1024, 00:18:38.886 "small_bufsize": 8192, 00:18:38.886 "large_bufsize": 135168, 00:18:38.886 "enable_numa": false 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "sock", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "sock_set_default_impl", 00:18:38.886 "params": { 00:18:38.886 "impl_name": "posix" 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "sock_impl_set_options", 00:18:38.886 "params": { 00:18:38.886 "impl_name": "ssl", 00:18:38.886 "recv_buf_size": 4096, 00:18:38.886 "send_buf_size": 4096, 00:18:38.886 "enable_recv_pipe": true, 00:18:38.886 "enable_quickack": false, 00:18:38.886 "enable_placement_id": 0, 00:18:38.886 "enable_zerocopy_send_server": true, 00:18:38.886 "enable_zerocopy_send_client": false, 00:18:38.886 "zerocopy_threshold": 0, 00:18:38.886 "tls_version": 0, 00:18:38.886 "enable_ktls": false 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "sock_impl_set_options", 00:18:38.886 "params": { 
00:18:38.886 "impl_name": "posix", 00:18:38.886 "recv_buf_size": 2097152, 00:18:38.886 "send_buf_size": 2097152, 00:18:38.886 "enable_recv_pipe": true, 00:18:38.886 "enable_quickack": false, 00:18:38.886 "enable_placement_id": 0, 00:18:38.886 "enable_zerocopy_send_server": true, 00:18:38.886 "enable_zerocopy_send_client": false, 00:18:38.886 "zerocopy_threshold": 0, 00:18:38.886 "tls_version": 0, 00:18:38.886 "enable_ktls": false 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "vmd", 00:18:38.886 "config": [] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "accel", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "accel_set_options", 00:18:38.886 "params": { 00:18:38.886 "small_cache_size": 128, 00:18:38.886 "large_cache_size": 16, 00:18:38.886 "task_count": 2048, 00:18:38.886 "sequence_count": 2048, 00:18:38.886 "buf_count": 2048 00:18:38.886 } 00:18:38.886 } 00:18:38.886 ] 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "subsystem": "bdev", 00:18:38.886 "config": [ 00:18:38.886 { 00:18:38.886 "method": "bdev_set_options", 00:18:38.886 "params": { 00:18:38.886 "bdev_io_pool_size": 65535, 00:18:38.886 "bdev_io_cache_size": 256, 00:18:38.886 "bdev_auto_examine": true, 00:18:38.886 "iobuf_small_cache_size": 128, 00:18:38.886 "iobuf_large_cache_size": 16 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_raid_set_options", 00:18:38.886 "params": { 00:18:38.886 "process_window_size_kb": 1024, 00:18:38.886 "process_max_bandwidth_mb_sec": 0 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_iscsi_set_options", 00:18:38.886 "params": { 00:18:38.886 "timeout_sec": 30 00:18:38.886 } 00:18:38.886 }, 00:18:38.886 { 00:18:38.886 "method": "bdev_nvme_set_options", 00:18:38.886 "params": { 00:18:38.886 "action_on_timeout": "none", 00:18:38.886 "timeout_us": 0, 00:18:38.886 "timeout_admin_us": 0, 00:18:38.886 "keep_alive_timeout_ms": 10000, 00:18:38.886 
"arbitration_burst": 0, 00:18:38.886 "low_priority_weight": 0, 00:18:38.886 "medium_priority_weight": 0, 00:18:38.886 "high_priority_weight": 0, 00:18:38.886 "nvme_adminq_poll_period_us": 10000, 00:18:38.886 "nvme_ioq_poll_period_us": 0, 00:18:38.886 "io_queue_requests": 512, 00:18:38.886 "delay_cmd_submit": true, 00:18:38.886 "transport_retry_count": 4, 00:18:38.886 "bdev_retry_count": 3, 00:18:38.886 "transport_ack_timeout": 0, 00:18:38.886 "ctrlr_loss_timeout_sec": 0, 00:18:38.886 "reconnect_delay_sec": 0, 00:18:38.886 "fast_io_fail_timeout_sec": 0, 00:18:38.886 "disable_auto_failback": false, 00:18:38.886 "generate_uuids": false, 00:18:38.886 "transport_tos": 0, 00:18:38.886 "nvme_error_stat": false, 00:18:38.887 "rdma_srq_size": 0, 00:18:38.887 "io_path_stat": false, 00:18:38.887 "allow_accel_sequence": false, 00:18:38.887 "rdma_max_cq_size": 0, 00:18:38.887 "rdma_cm_event_timeout_ms": 0, 00:18:38.887 "dhchap_digests": [ 00:18:38.887 "sha256", 00:18:38.887 "sha384", 00:18:38.887 "sha512" 00:18:38.887 ], 00:18:38.887 "dhchap_dhgroups": [ 00:18:38.887 "null", 00:18:38.887 "ffdhe2048", 00:18:38.887 "ffdhe3072", 00:18:38.887 "ffdhe4096", 00:18:38.887 "ffdhe6144", 00:18:38.887 "ffdhe8192" 00:18:38.887 ] 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_nvme_attach_controller", 00:18:38.887 "params": { 00:18:38.887 "name": "nvme0", 00:18:38.887 "trtype": "TCP", 00:18:38.887 "adrfam": "IPv4", 00:18:38.887 "traddr": "10.0.0.2", 00:18:38.887 "trsvcid": "4420", 00:18:38.887 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:38.887 "prchk_reftag": false, 00:18:38.887 "prchk_guard": false, 00:18:38.887 "ctrlr_loss_timeout_sec": 0, 00:18:38.887 "reconnect_delay_sec": 0, 00:18:38.887 "fast_io_fail_timeout_sec": 0, 00:18:38.887 "psk": "key0", 00:18:38.887 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:38.887 "hdgst": false, 00:18:38.887 "ddgst": false, 00:18:38.887 "multipath": "multipath" 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 
"method": "bdev_nvme_set_hotplug", 00:18:38.887 "params": { 00:18:38.887 "period_us": 100000, 00:18:38.887 "enable": false 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_enable_histogram", 00:18:38.887 "params": { 00:18:38.887 "name": "nvme0n1", 00:18:38.887 "enable": true 00:18:38.887 } 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "method": "bdev_wait_for_examine" 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 }, 00:18:38.887 { 00:18:38.887 "subsystem": "nbd", 00:18:38.887 "config": [] 00:18:38.887 } 00:18:38.887 ] 00:18:38.887 }' 00:18:38.887 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:38.887 09:02:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:38.887 [2024-11-20 09:02:54.750791] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:38.887 [2024-11-20 09:02:54.750840] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2359817 ] 00:18:38.887 [2024-11-20 09:02:54.823616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:38.887 [2024-11-20 09:02:54.864719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:39.146 [2024-11-20 09:02:55.016826] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.714 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.714 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.714 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:39.714 09:02:55 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:18:39.973 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:39.973 09:02:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:39.973 Running I/O for 1 seconds... 00:18:40.910 5229.00 IOPS, 20.43 MiB/s 00:18:40.910 Latency(us) 00:18:40.910 [2024-11-20T08:02:56.951Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:40.910 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:40.910 Verification LBA range: start 0x0 length 0x2000 00:18:40.910 nvme0n1 : 1.01 5289.88 20.66 0.00 0.00 24035.59 5299.87 32369.09 00:18:40.910 [2024-11-20T08:02:56.951Z] =================================================================================================================== 00:18:40.910 [2024-11-20T08:02:56.951Z] Total : 5289.88 20.66 0.00 0.00 24035.59 5299.87 32369.09 00:18:40.910 { 00:18:40.910 "results": [ 00:18:40.910 { 00:18:40.910 "job": "nvme0n1", 00:18:40.910 "core_mask": "0x2", 00:18:40.910 "workload": "verify", 00:18:40.910 "status": "finished", 00:18:40.910 "verify_range": { 00:18:40.910 "start": 0, 00:18:40.910 "length": 8192 00:18:40.910 }, 00:18:40.910 "queue_depth": 128, 00:18:40.910 "io_size": 4096, 00:18:40.910 "runtime": 1.012877, 00:18:40.910 "iops": 5289.88218707701, 00:18:40.910 "mibps": 20.66360229326957, 00:18:40.910 "io_failed": 0, 00:18:40.910 "io_timeout": 0, 00:18:40.910 "avg_latency_us": 24035.58884853206, 00:18:40.910 "min_latency_us": 5299.8678260869565, 00:18:40.910 "max_latency_us": 32369.085217391304 00:18:40.910 } 00:18:40.910 ], 00:18:40.910 "core_count": 1 00:18:40.910 } 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:18:40.910 09:02:56 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:18:40.910 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:41.169 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:18:41.169 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:18:41.169 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:18:41.169 09:02:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:41.169 nvmf_trace.0 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2359817 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2359817 ']' 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2359817 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 2359817 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359817' 00:18:41.169 killing process with pid 2359817 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2359817 00:18:41.169 Received shutdown signal, test time was about 1.000000 seconds 00:18:41.169 00:18:41.169 Latency(us) 00:18:41.169 [2024-11-20T08:02:57.210Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:41.169 [2024-11-20T08:02:57.210Z] =================================================================================================================== 00:18:41.169 [2024-11-20T08:02:57.210Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:41.169 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2359817 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:41.429 rmmod nvme_tcp 00:18:41.429 rmmod nvme_fabrics 00:18:41.429 rmmod nvme_keyring 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 2359571 ']' 00:18:41.429 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 2359571 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 2359571 ']' 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 2359571 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2359571 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2359571' 00:18:41.430 killing process with pid 2359571 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 2359571 00:18:41.430 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 2359571 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@264 -- # local dev 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:41.690 09:02:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # return 0 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_1 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@284 -- # iptr 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-save 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-restore 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.Ne859tonNo /tmp/tmp.kWQxPKaM4d /tmp/tmp.90f9xnQ4x1 00:18:43.614 00:18:43.614 real 1m19.852s 00:18:43.614 user 2m2.615s 00:18:43.614 sys 0m30.345s 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:43.614 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.614 ************************************ 00:18:43.614 END TEST nvmf_tls 00:18:43.614 ************************************ 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 
3 -le 1 ']' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:43.874 ************************************ 00:18:43.874 START TEST nvmf_fips 00:18:43.874 ************************************ 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:43.874 * Looking for test storage... 00:18:43.874 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:18:43.874 09:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:43.874 09:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.874 --rc genhtml_branch_coverage=1 00:18:43.874 --rc genhtml_function_coverage=1 00:18:43.874 --rc genhtml_legend=1 00:18:43.874 --rc geninfo_all_blocks=1 00:18:43.874 --rc geninfo_unexecuted_blocks=1 00:18:43.874 00:18:43.874 ' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.874 --rc genhtml_branch_coverage=1 00:18:43.874 --rc genhtml_function_coverage=1 00:18:43.874 --rc genhtml_legend=1 00:18:43.874 --rc geninfo_all_blocks=1 00:18:43.874 --rc geninfo_unexecuted_blocks=1 00:18:43.874 00:18:43.874 ' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.874 --rc genhtml_branch_coverage=1 00:18:43.874 --rc genhtml_function_coverage=1 00:18:43.874 --rc genhtml_legend=1 00:18:43.874 --rc geninfo_all_blocks=1 00:18:43.874 --rc geninfo_unexecuted_blocks=1 00:18:43.874 00:18:43.874 ' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:43.874 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:43.874 --rc genhtml_branch_coverage=1 00:18:43.874 --rc genhtml_function_coverage=1 00:18:43.874 --rc genhtml_legend=1 00:18:43.874 --rc geninfo_all_blocks=1 00:18:43.874 --rc geninfo_unexecuted_blocks=1 00:18:43.874 00:18:43.874 ' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:43.874 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:18:43.875 09:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:43.875 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:18:43.875 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:18:44.134 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:18:44.134 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:18:44.134 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:18:44.134 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:18:44.134 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:44.134 09:02:59 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:18:44.135 09:02:59 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:18:44.135 Error setting digest 00:18:44.135 40B2E9ABDB7F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:18:44.135 40B2E9ABDB7F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:18:44.135 09:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:18:44.135 09:03:00 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:50.710 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:50.710 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:50.710 Found net devices under 0000:86:00.0: cvl_0_0 00:18:50.710 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
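The gather_supported_nvmf_pci_devs loop above resolves each detected PCI address (0000:86:00.0, 0000:86:00.1) to its kernel interface names by globbing sysfs. A stand-alone sketch of that lookup follows; the optional sysfs-root parameter is an illustration-only addition, the real helper in nvmf/common.sh globs /sys directly:

```shell
#!/usr/bin/env bash
# Sketch of the sysfs lookup used by nvmf/common.sh: list the network
# interfaces the kernel registered under a given PCI address.
# $2 (alternate sysfs root) is a testing convenience, not in the original.
net_devs_for_pci() {
    local pci=$1 root=${2:-/sys/bus/pci/devices}
    local devs=("$root/$pci/net/"*)
    [[ -e ${devs[0]} ]] || return 1    # no net children: driver unbound/unknown
    printf '%s\n' "${devs[@]##*/}"     # keep interface names, drop the path
}
```

With nullglob unset, a device with no bound network driver leaves the literal glob pattern in the array, so the `-e` test fails and the function returns nonzero, mirroring how the log skips unbound devices.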
00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:50.711 Found net devices under 0000:86:00.1: cvl_0_1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # create_target_ns 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:50.711 09:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:50.711 10.0.0.1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- 
# eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:50.711 10.0.0.2 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:50.711 09:03:05 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips 
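The set_ip calls above drive val_to_ip, which turns the 32-bit address-pool value (167772161 = 0x0a000001) into dotted-quad form before handing it to `ip addr add`. A self-contained re-implementation, assuming the same shift-and-mask decoding implied by the `printf '%u.%u.%u.%u\n' 10 0 0 1` output in the log:

```shell
#!/usr/bin/env bash
# Re-implementation sketch of nvmf/setup.sh's val_to_ip: decode a 32-bit
# integer into dotted-quad notation, most significant octet first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161    # 10.0.0.1 (initiator side, cvl_0_0)
val_to_ip 167772162    # 10.0.0.2 (target side, cvl_0_1 inside nvmf_ns_spdk)
```

This is also why setup_interfaces advances ip_pool by 2 per pair: each initiator/target pair consumes two consecutive addresses from the pool.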
-- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:50.711 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:50.712 PING 10.0.0.1 (10.0.0.1) 56(84) bytes 
of data. 00:18:50.712 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.344 ms 00:18:50.712 00:18:50.712 --- 10.0.0.1 ping statistics --- 00:18:50.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.712 rtt min/avg/max/mdev = 0.344/0.344/0.344/0.000 ms 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:50.712 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:50.712 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.114 ms 00:18:50.712 00:18:50.712 --- 10.0.0.2 ping statistics --- 00:18:50.712 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:50.712 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:18:50.712 09:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # 
get_net_dev target0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:50.712 09:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:50.712 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=2363987 00:18:50.713 09:03:06 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 2363987 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2363987 ']' 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:50.713 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.713 09:03:06 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.713 [2024-11-20 09:03:06.249326] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:50.713 [2024-11-20 09:03:06.249375] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:50.713 [2024-11-20 09:03:06.326551] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.713 [2024-11-20 09:03:06.367696] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:50.713 [2024-11-20 09:03:06.367735] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
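waitforlisten above blocks until the freshly launched nvmf_tgt (pid 2363987) answers on /var/tmp/spdk.sock. A simplified polling loop in the same spirit; the real helper in autotest_common.sh also probes the RPC server, while this sketch only waits for the socket node to appear:

```shell
#!/usr/bin/env bash
# Simplified sketch of waitforlisten: poll until a UNIX-domain socket
# shows up, giving up after max_retries attempts (the real helper also
# issues an RPC to confirm the daemon is actually serving requests).
wait_for_sock() {
    local sock=$1 max_retries=${2:-100}
    while (( max_retries-- > 0 )); do
        [[ -S $sock ]] && return 0
        sleep 0.1
    done
    return 1                 # daemon never came up
}
```

The `local max_retries=100` default matches the value the log shows autotest_common.sh setting before it prints the "Waiting for process to start up..." message.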
00:18:50.713 [2024-11-20 09:03:06.367742] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:50.713 [2024-11-20 09:03:06.367748] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:50.713 [2024-11-20 09:03:06.367754] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:50.713 [2024-11-20 09:03:06.368323] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.kBl 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.kBl 00:18:51.280 09:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.kBl 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.kBl 00:18:51.280 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:51.280 [2024-11-20 09:03:07.297686] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:51.280 [2024-11-20 09:03:07.313696] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:51.280 [2024-11-20 09:03:07.313892] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:51.539 malloc0 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=2364234 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 2364234 /var/tmp/bdevperf.sock 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 2364234 ']' 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.539 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
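The key the test just wrote to `/tmp/spdk-psk.kBl` (and later registers with `keyring_file_add_key`) is a TLS PSK in the NVMe/TCP interchange format: `NVMeTLSkey-1:<hash>:<base64 payload>:`, where the decoded payload carries the raw key plus a 4-byte CRC32. A minimal well-formedness check, sketched under the assumption that hash id `01` denotes a 32-byte (SHA-256) key and `02` a 48-byte (SHA-384) key; the CRC itself is not verified here:

```python
import base64
import re

# Interchange format used by the PSK in the trace above:
#   NVMeTLSkey-1:<hash id>:<base64(key bytes + 4-byte CRC32)>:
_PSK_RE = re.compile(r"^NVMeTLSkey-1:(01|02):([A-Za-z0-9+/=]+):$")

def check_psk_interchange(key: str) -> bool:
    """Return True if `key` is a structurally valid TLS PSK interchange string.

    Assumed lengths: hash id 01 -> 32-byte key, 02 -> 48-byte key, each
    followed by a 4-byte CRC32 inside the base64 payload (CRC not checked).
    """
    m = _PSK_RE.match(key)
    if not m:
        return False
    hash_id, payload = m.groups()
    try:
        raw = base64.b64decode(payload, validate=True)
    except ValueError:
        return False
    expected_len = {"01": 32 + 4, "02": 48 + 4}[hash_id]
    return len(raw) == expected_len
```

Applied to the key from the trace, the 48-character base64 payload decodes to 36 bytes, consistent with a 32-byte SHA-256 PSK plus its CRC.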
00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.539 09:03:07 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:51.539 [2024-11-20 09:03:07.446241] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:18:51.539 [2024-11-20 09:03:07.446293] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2364234 ] 00:18:51.539 [2024-11-20 09:03:07.519181] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.539 [2024-11-20 09:03:07.561426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:52.476 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:52.476 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:18:52.476 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.kBl 00:18:52.476 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.735 [2024-11-20 09:03:08.670639] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:52.735 TLSTESTn1 00:18:52.735 09:03:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:52.993 Running I/O 
for 10 seconds... 00:18:54.866 5225.00 IOPS, 20.41 MiB/s [2024-11-20T08:03:12.285Z] 5300.00 IOPS, 20.70 MiB/s [2024-11-20T08:03:13.222Z] 5321.67 IOPS, 20.79 MiB/s [2024-11-20T08:03:14.157Z] 5364.75 IOPS, 20.96 MiB/s [2024-11-20T08:03:15.093Z] 5377.20 IOPS, 21.00 MiB/s [2024-11-20T08:03:16.030Z] 5353.00 IOPS, 20.91 MiB/s [2024-11-20T08:03:16.967Z] 5323.29 IOPS, 20.79 MiB/s [2024-11-20T08:03:17.903Z] 5288.25 IOPS, 20.66 MiB/s [2024-11-20T08:03:19.282Z] 5238.44 IOPS, 20.46 MiB/s [2024-11-20T08:03:19.283Z] 5217.90 IOPS, 20.38 MiB/s 00:19:03.242 Latency(us) 00:19:03.242 [2024-11-20T08:03:19.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.242 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:03.242 Verification LBA range: start 0x0 length 0x2000 00:19:03.242 TLSTESTn1 : 10.02 5221.60 20.40 0.00 0.00 24476.96 7265.95 30773.43 00:19:03.242 [2024-11-20T08:03:19.283Z] =================================================================================================================== 00:19:03.242 [2024-11-20T08:03:19.283Z] Total : 5221.60 20.40 0.00 0.00 24476.96 7265.95 30773.43 00:19:03.242 { 00:19:03.242 "results": [ 00:19:03.242 { 00:19:03.242 "job": "TLSTESTn1", 00:19:03.242 "core_mask": "0x4", 00:19:03.242 "workload": "verify", 00:19:03.242 "status": "finished", 00:19:03.242 "verify_range": { 00:19:03.242 "start": 0, 00:19:03.242 "length": 8192 00:19:03.242 }, 00:19:03.242 "queue_depth": 128, 00:19:03.242 "io_size": 4096, 00:19:03.242 "runtime": 10.017229, 00:19:03.242 "iops": 5221.603698987015, 00:19:03.242 "mibps": 20.39688944916803, 00:19:03.242 "io_failed": 0, 00:19:03.242 "io_timeout": 0, 00:19:03.242 "avg_latency_us": 24476.96450406388, 00:19:03.242 "min_latency_us": 7265.947826086956, 00:19:03.242 "max_latency_us": 30773.426086956522 00:19:03.242 } 00:19:03.242 ], 00:19:03.242 "core_count": 1 00:19:03.242 } 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 
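The bdevperf summary just printed reports both IOPS and MiB/s; the second is derived from the first and the configured I/O size (`-o 4096`). A quick check of that arithmetic against the figures in the JSON result above:

```python
# Reproduce the MiB/s figure bdevperf reports from its IOPS and I/O size.
iops = 5221.603698987015   # "iops" from the JSON summary above
io_size = 4096             # bytes per I/O ("io_size")

mibps = iops * io_size / (1024 * 1024)
print(f"{mibps:.2f} MiB/s")  # prints "20.40 MiB/s", matching the report
```

With a 4096-byte I/O size the conversion reduces to `iops / 256`, which reproduces the reported `mibps` of 20.39688944916803 exactly.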
00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:03.242 09:03:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:03.242 nvmf_trace.0 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2364234 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2364234 ']' 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2364234 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2364234 00:19:03.242 09:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2364234' 00:19:03.242 killing process with pid 2364234 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2364234 00:19:03.242 Received shutdown signal, test time was about 10.000000 seconds 00:19:03.242 00:19:03.242 Latency(us) 00:19:03.242 [2024-11-20T08:03:19.283Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:03.242 [2024-11-20T08:03:19.283Z] =================================================================================================================== 00:19:03.242 [2024-11-20T08:03:19.283Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2364234 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:03.242 rmmod nvme_tcp 00:19:03.242 rmmod nvme_fabrics 00:19:03.242 rmmod nvme_keyring 00:19:03.242 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 2363987 ']' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 2363987 ']' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2363987' 00:19:03.502 killing process with pid 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 2363987 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@264 -- # local dev 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@267 
-- # remove_target_ns 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:03.502 09:03:19 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # return 0 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@284 -- # iptr 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-save 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-restore 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.kBl 00:19:06.042 00:19:06.042 real 0m21.903s 00:19:06.042 user 0m23.153s 00:19:06.042 sys 0m10.288s 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:06.042 ************************************ 00:19:06.042 END TEST nvmf_fips 00:19:06.042 ************************************ 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:06.042 ************************************ 00:19:06.042 START TEST nvmf_control_msg_list 00:19:06.042 ************************************ 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:06.042 * Looking for test storage... 00:19:06.042 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:06.042 09:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.042 --rc genhtml_branch_coverage=1 00:19:06.042 --rc genhtml_function_coverage=1 00:19:06.042 --rc 
genhtml_legend=1 00:19:06.042 --rc geninfo_all_blocks=1 00:19:06.042 --rc geninfo_unexecuted_blocks=1 00:19:06.042 00:19:06.042 ' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.042 --rc genhtml_branch_coverage=1 00:19:06.042 --rc genhtml_function_coverage=1 00:19:06.042 --rc genhtml_legend=1 00:19:06.042 --rc geninfo_all_blocks=1 00:19:06.042 --rc geninfo_unexecuted_blocks=1 00:19:06.042 00:19:06.042 ' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.042 --rc genhtml_branch_coverage=1 00:19:06.042 --rc genhtml_function_coverage=1 00:19:06.042 --rc genhtml_legend=1 00:19:06.042 --rc geninfo_all_blocks=1 00:19:06.042 --rc geninfo_unexecuted_blocks=1 00:19:06.042 00:19:06.042 ' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:06.042 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:06.042 --rc genhtml_branch_coverage=1 00:19:06.042 --rc genhtml_function_coverage=1 00:19:06.042 --rc genhtml_legend=1 00:19:06.042 --rc geninfo_all_blocks=1 00:19:06.042 --rc geninfo_unexecuted_blocks=1 00:19:06.042 00:19:06.042 ' 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:06.042 09:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:06.042 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:06.043 
09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:06.043 09:03:21 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:06.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:19:06.043 09:03:21 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:12.615 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:19:12.615 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:12.616 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:12.616 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:12.616 Found net devices under 0000:86:00.0: cvl_0_0 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:12.616 Found net devices under 0000:86:00.1: cvl_0_1 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # create_target_ns 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:12.616 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:12.617 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:12.617 10.0.0.1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:12.617 10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:12.617 
09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:12.617 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:12.617 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:19:12.617 00:19:12.617 --- 10.0.0.1 ping statistics --- 00:19:12.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.617 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:12.617 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:12.617 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:12.617 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:19:12.617 00:19:12.617 --- 10.0.0.2 ping statistics --- 00:19:12.617 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:12.617 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:12.617 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:12.618 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:12.618 09:03:27 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # 
set +x 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=2370015 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 2370015 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 2370015 ']' 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:12.618 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.618 09:03:27 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 [2024-11-20 09:03:27.992532] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:19:12.618 [2024-11-20 09:03:27.992587] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:12.618 [2024-11-20 09:03:28.073898] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.618 [2024-11-20 09:03:28.113280] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:12.618 [2024-11-20 09:03:28.113317] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:12.618 [2024-11-20 09:03:28.113325] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:12.618 [2024-11-20 09:03:28.113331] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:12.618 [2024-11-20 09:03:28.113337] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:12.618 [2024-11-20 09:03:28.113895] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:19:12.618 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 [2024-11-20 09:03:28.258333] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 Malloc0 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:12.619 [2024-11-20 09:03:28.298834] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=2370079 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=2370081 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=2370083 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 2370079 00:19:12.619 09:03:28 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:12.619 [2024-11-20 09:03:28.397543] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:19:12.619 [2024-11-20 09:03:28.397733] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:12.619 [2024-11-20 09:03:28.397889] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:13.554 Initializing NVMe Controllers 00:19:13.554 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:13.554 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:19:13.554 Initialization complete. Launching workers. 00:19:13.554 ======================================================== 00:19:13.554 Latency(us) 00:19:13.554 Device Information : IOPS MiB/s Average min max 00:19:13.555 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 25.00 0.10 40886.69 40665.02 40969.46 00:19:13.555 ======================================================== 00:19:13.555 Total : 25.00 0.10 40886.69 40665.02 40969.46 00:19:13.555 00:19:13.555 Initializing NVMe Controllers 00:19:13.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:13.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:19:13.555 Initialization complete. Launching workers. 
00:19:13.555 ======================================================== 00:19:13.555 Latency(us) 00:19:13.555 Device Information : IOPS MiB/s Average min max 00:19:13.555 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 3875.00 15.14 257.69 149.78 532.11 00:19:13.555 ======================================================== 00:19:13.555 Total : 3875.00 15.14 257.69 149.78 532.11 00:19:13.555 00:19:13.814 Initializing NVMe Controllers 00:19:13.814 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:13.814 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:19:13.814 Initialization complete. Launching workers. 00:19:13.814 ======================================================== 00:19:13.814 Latency(us) 00:19:13.814 Device Information : IOPS MiB/s Average min max 00:19:13.814 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 4922.00 19.23 202.81 121.90 436.15 00:19:13.814 ======================================================== 00:19:13.814 Total : 4922.00 19.23 202.81 121.90 436.15 00:19:13.814 00:19:13.814 [2024-11-20 09:03:29.621751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11743c0 is same with the state(6) to be set 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 2370081 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 2370083 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 
-- # sync 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:13.814 rmmod nvme_tcp 00:19:13.814 rmmod nvme_fabrics 00:19:13.814 rmmod nvme_keyring 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 2370015 ']' 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 2370015 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 2370015 ']' 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 2370015 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2370015 00:19:13.814 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:13.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:13.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2370015' 00:19:13.815 killing process with pid 2370015 00:19:13.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 2370015 00:19:13.815 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 2370015 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@264 -- # local dev 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:14.074 09:03:29 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # return 0 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:15.981 09:03:31 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:19:15.981 
09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@284 -- # iptr 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-save 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-restore 00:19:15.981 00:19:15.981 real 0m10.320s 00:19:15.981 user 0m6.790s 00:19:15.981 sys 0m5.639s 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:15.981 09:03:31 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:19:15.981 ************************************ 00:19:15.981 END TEST nvmf_control_msg_list 00:19:15.981 ************************************ 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:16.241 ************************************ 00:19:16.241 START TEST nvmf_wait_for_buf 00:19:16.241 ************************************ 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:19:16.241 * Looking for test storage... 
00:19:16.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:19:16.241 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:16.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.242 --rc genhtml_branch_coverage=1 00:19:16.242 --rc genhtml_function_coverage=1 00:19:16.242 --rc genhtml_legend=1 00:19:16.242 --rc geninfo_all_blocks=1 00:19:16.242 --rc geninfo_unexecuted_blocks=1 00:19:16.242 00:19:16.242 ' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:16.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.242 --rc genhtml_branch_coverage=1 00:19:16.242 --rc genhtml_function_coverage=1 00:19:16.242 --rc genhtml_legend=1 00:19:16.242 --rc geninfo_all_blocks=1 00:19:16.242 --rc geninfo_unexecuted_blocks=1 00:19:16.242 00:19:16.242 ' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:16.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.242 --rc genhtml_branch_coverage=1 00:19:16.242 --rc genhtml_function_coverage=1 00:19:16.242 --rc genhtml_legend=1 00:19:16.242 --rc geninfo_all_blocks=1 00:19:16.242 --rc geninfo_unexecuted_blocks=1 00:19:16.242 00:19:16.242 ' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:16.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:16.242 --rc genhtml_branch_coverage=1 00:19:16.242 --rc genhtml_function_coverage=1 00:19:16.242 --rc genhtml_legend=1 00:19:16.242 --rc geninfo_all_blocks=1 00:19:16.242 --rc geninfo_unexecuted_blocks=1 00:19:16.242 00:19:16.242 ' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:16.242 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:19:16.242 09:03:32 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:19:22.813 
09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:22.813 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:22.813 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:22.814 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:22.814 09:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:22.814 Found net devices under 0000:86:00.0: cvl_0_0 00:19:22.814 09:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:22.814 Found net devices under 0000:86:00.1: cvl_0_1 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:22.814 09:03:37 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # create_target_ns 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:22.814 09:03:37 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:22.814 10.0.0.1 
00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:22.814 10.0.0.2 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 
00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:22.814 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:22.815 09:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:22.815 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:22.815 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.372 ms 00:19:22.815 00:19:22.815 --- 10.0.0.1 ping statistics --- 00:19:22.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.815 rtt min/avg/max/mdev = 0.372/0.372/0.372/0.000 ms 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:22.815 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:22.815 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:19:22.815 00:19:22.815 --- 10.0.0.2 ping statistics --- 00:19:22.815 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:22.815 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:22.815 09:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:22.815 09:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:19:22.815 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 
00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=2373882 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 2373882 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 2373882 ']' 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:22.816 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 [2024-11-20 09:03:38.408708] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:19:22.816 [2024-11-20 09:03:38.408755] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:22.816 [2024-11-20 09:03:38.487318] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:22.816 [2024-11-20 09:03:38.528151] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:22.816 [2024-11-20 09:03:38.528190] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:22.816 [2024-11-20 09:03:38.528197] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:22.816 [2024-11-20 09:03:38.528203] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:22.816 [2024-11-20 09:03:38.528212] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:22.816 [2024-11-20 09:03:38.528787] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 Malloc0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 [2024-11-20 09:03:38.702260] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:22.816 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:22.816 [2024-11-20 09:03:38.730433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:22.817 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:22.817 09:03:38 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:19:22.817 [2024-11-20 09:03:38.823031] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:19:24.720 Initializing NVMe Controllers 00:19:24.720 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:19:24.720 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:19:24.720 Initialization complete. Launching workers. 00:19:24.720 ======================================================== 00:19:24.720 Latency(us) 00:19:24.720 Device Information : IOPS MiB/s Average min max 00:19:24.720 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 28.00 3.50 151372.85 7274.76 191534.81 00:19:24.720 ======================================================== 00:19:24.720 Total : 28.00 3.50 151372.85 7274.76 191534.81 00:19:24.720 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=422 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 422 -eq 0 ]] 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:24.720 rmmod nvme_tcp 00:19:24.720 rmmod nvme_fabrics 00:19:24.720 rmmod nvme_keyring 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 2373882 ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 2373882 ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2373882' 00:19:24.720 killing process with pid 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 2373882 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@264 -- # local dev 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:24.720 09:03:40 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # return 0 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:19:26.770 
09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@284 -- # iptr 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-save 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-restore 00:19:26.770 00:19:26.770 real 0m10.677s 00:19:26.770 user 0m4.195s 00:19:26.770 sys 0m4.944s 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:19:26.770 ************************************ 00:19:26.770 END TEST nvmf_wait_for_buf 00:19:26.770 ************************************ 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:19:26.770 09:03:42 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:33.340 
09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:19:33.340 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:33.341 09:03:48 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:33.341 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:33.341 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:33.341 09:03:48 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:33.341 Found net devices under 0000:86:00.0: cvl_0_0 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:33.341 Found net devices under 0000:86:00.1: cvl_0_1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:33.341 ************************************ 00:19:33.341 START TEST nvmf_perf_adq 00:19:33.341 ************************************ 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:19:33.341 * Looking for test storage... 00:19:33.341 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:19:33.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.341 --rc genhtml_branch_coverage=1 00:19:33.341 --rc genhtml_function_coverage=1 00:19:33.341 --rc genhtml_legend=1 00:19:33.341 --rc geninfo_all_blocks=1 00:19:33.341 --rc geninfo_unexecuted_blocks=1 00:19:33.341 00:19:33.341 ' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:33.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.341 --rc genhtml_branch_coverage=1 00:19:33.341 --rc genhtml_function_coverage=1 00:19:33.341 --rc genhtml_legend=1 00:19:33.341 --rc geninfo_all_blocks=1 00:19:33.341 --rc geninfo_unexecuted_blocks=1 00:19:33.341 00:19:33.341 ' 00:19:33.341 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:33.341 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.341 --rc genhtml_branch_coverage=1 00:19:33.342 --rc genhtml_function_coverage=1 00:19:33.342 --rc genhtml_legend=1 00:19:33.342 --rc geninfo_all_blocks=1 00:19:33.342 --rc geninfo_unexecuted_blocks=1 00:19:33.342 00:19:33.342 ' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:33.342 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:33.342 --rc genhtml_branch_coverage=1 00:19:33.342 --rc genhtml_function_coverage=1 00:19:33.342 --rc genhtml_legend=1 00:19:33.342 --rc geninfo_all_blocks=1 00:19:33.342 --rc geninfo_unexecuted_blocks=1 00:19:33.342 00:19:33.342 ' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:33.342 
09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:33.342 09:03:48 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
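The PATH echoed above has accumulated the same go/protoc/golangci directories many times because paths/export.sh prepends them on every sourcing. A hedged sketch of an order-preserving deduplication (`dedupe_path` is illustrative, not part of SPDK):

```shell
# Deduplicate a colon-separated PATH, keeping the first occurrence of
# each directory so precedence is preserved.
dedupe_path() {
    local dir out= seen=:
    local IFS=:
    for dir in $1; do
        [[ $seen == *":$dir:"* ]] && continue   # already emitted
        seen+="$dir:"
        out+="${out:+:}$dir"
    done
    printf '%s\n' "$out"
}

dedupe_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/sbin"
```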
00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:33.342 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:19:33.342 09:03:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:38.617 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.617 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:19:38.617 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:38.617 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:38.617 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
local -a pci_net_devs 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
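The `[: : integer expression expected` error logged above (nvmf/common.sh line 31) comes from handing `[` an empty string where `-eq` needs an integer. A hedged sketch of the failure mode and a defensive default; `flag` here is a hypothetical variable, since the log does not show which one was empty:

```shell
# Reproducing and avoiding "[: : integer expression expected".
flag=""
# [ "$flag" -eq 1 ]             # would reproduce the logged error
if [ "${flag:-0}" -eq 1 ]; then   # empty defaults to 0, so the test is well-formed
    echo "flag enabled"
else
    echo "flag disabled"
fi
```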
00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:38.618 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.618 09:03:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:38.618 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
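The device discovery traced above resolves each PCI address to its kernel interface names by globbing the device's sysfs `net/` directory and then stripping the leading path (`${pci_net_devs[@]##*/}`). A self-contained sketch of that pattern against a fabricated directory tree, since `/sys/bus/pci/devices` is not portable to reproduce here:

```shell
# Map a PCI address to its netdev names, as nvmf/common.sh does.
tmp=$(mktemp -d)
pci="0000:86:00.1"
mkdir -p "$tmp/$pci/net/cvl_0_1"          # fabricated stand-in for sysfs

pci_net_devs=("$tmp/$pci/net/"*)          # real code globs /sys/bus/pci/devices/$pci/net/*
pci_net_devs=("${pci_net_devs[@]##*/}")   # keep only the interface names
echo "Found net devices under $pci: ${pci_net_devs[*]}"
rm -rf "$tmp"
```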
00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:38.618 Found net devices under 0000:86:00.0: cvl_0_0 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:38.618 Found net devices under 0000:86:00.1: cvl_0_1 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:19:38.618 09:03:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:19:39.553 09:03:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:19:41.451 09:03:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:19:46.735 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.735 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:46.735 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:46.736 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:46.736 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:46.736 Found net devices under 0000:86:00.0: cvl_0_0 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:46.736 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:46.736 Found net devices under 0000:86:00.1: cvl_0_1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:46.736 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:46.736 10.0.0.1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:46.736 09:04:02 
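The `val_to_ip` calls traced above turn the 32-bit pool value `167772161` (`0x0a000001`) into `10.0.0.1`, incrementing by two per interface pair. A self-contained sketch of that helper, reconstructed from the `printf '%u.%u.%u.%u\n'` output in the log (the shift-and-mask body is an assumption; only the name and observed output come from the trace):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad notation, as nvmf/setup.sh's
# val_to_ip helper appears to do. Extract each octet with arithmetic
# shifts and masks, then format with printf.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side)
```

This matches the pool arithmetic in `setup_interfaces`, where `ip_pool=0x0a000001` advances by 2 for every initiator/target pair.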
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:46.736 10.0.0.2 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:46.736 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 
00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:46.737 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:46.737 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:19:46.737 00:19:46.737 --- 10.0.0.1 ping statistics --- 00:19:46.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.737 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:46.737 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:46.737 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:46.737 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:19:46.737 00:19:46.737 --- 10.0.0.2 ping statistics --- 00:19:46.737 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:46.737 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:46.737 09:04:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:46.737 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 
00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=2382254 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 2382254 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2382254 ']' 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:46.738 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:46.738 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.738 [2024-11-20 09:04:02.764355] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:19:46.738 [2024-11-20 09:04:02.764399] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:46.996 [2024-11-20 09:04:02.842197] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:46.996 [2024-11-20 09:04:02.883700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:46.996 [2024-11-20 09:04:02.883744] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:46.996 [2024-11-20 09:04:02.883753] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:46.996 [2024-11-20 09:04:02.883759] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:46.996 [2024-11-20 09:04:02.883764] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
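The `waitforlisten` step above ("Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...") polls with `max_retries=100` until the RPC socket appears. A minimal sketch of that retry loop, assuming a simple existence poll (the function name, interval, and defaults here are illustrative, not taken from `autotest_common.sh`):

```shell
#!/usr/bin/env bash
# Poll for a path (e.g. an RPC socket like /var/tmp/spdk.sock) to appear,
# giving up after max_retries attempts. Returns 0 once the path exists,
# 1 on timeout.
waitforfile() {
    local path=$1 max_retries=${2:-100} i=0
    while (( i++ < max_retries )); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}
```

The real helper additionally verifies that the PID it was given is still alive between retries, so a crashed `nvmf_tgt` fails fast instead of burning the full retry budget.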
00:19:46.996 [2024-11-20 09:04:02.885371] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:46.996 [2024-11-20 09:04:02.885479] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:46.996 [2024-11-20 09:04:02.885589] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:46.996 [2024-11-20 09:04:02.885590] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.996 09:04:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:46.996 09:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:46.996 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 [2024-11-20 09:04:03.099300] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 Malloc1 00:19:47.254 09:04:03 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:47.254 [2024-11-20 09:04:03.161382] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=2382430 00:19:47.254 09:04:03 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:19:47.254 09:04:03 
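The `adq_configure_nvmf_target` steps traced above (perf_adq.sh@42-49) reduce to a short RPC sequence against the running target. A sketch using SPDK's standalone `rpc.py` client in place of the in-script `rpc_cmd` wrapper; the `scripts/rpc.py` path and the use of `jq` are assumptions, while the subcommands and arguments are copied from the trace:

```shell
RPC="scripts/rpc.py"   # assumed location inside an SPDK checkout

impl=$($RPC sock_get_default_impl | jq -r .impl_name)   # "posix" in this run
$RPC sock_impl_set_options --enable-placement-id 0 \
    --enable-zerocopy-send-server -i "$impl"
$RPC framework_start_init
$RPC nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
$RPC bdev_malloc_create 64 512 -b Malloc1
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 \
    -t tcp -a 10.0.0.2 -s 4420
```

Note the ordering constraint visible in the log: the socket implementation options must be set before `framework_start_init`, which is why the target was launched with `--wait-for-rpc`.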
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:49.154 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:19:49.154 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:49.154 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:49.412 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:49.412 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:19:49.412 "tick_rate": 2300000000, 00:19:49.412 "poll_groups": [ 00:19:49.412 { 00:19:49.412 "name": "nvmf_tgt_poll_group_000", 00:19:49.412 "admin_qpairs": 1, 00:19:49.412 "io_qpairs": 1, 00:19:49.412 "current_admin_qpairs": 1, 00:19:49.412 "current_io_qpairs": 1, 00:19:49.412 "pending_bdev_io": 0, 00:19:49.412 "completed_nvme_io": 18673, 00:19:49.412 "transports": [ 00:19:49.412 { 00:19:49.412 "trtype": "TCP" 00:19:49.412 } 00:19:49.412 ] 00:19:49.412 }, 00:19:49.412 { 00:19:49.412 "name": "nvmf_tgt_poll_group_001", 00:19:49.412 "admin_qpairs": 0, 00:19:49.412 "io_qpairs": 1, 00:19:49.412 "current_admin_qpairs": 0, 00:19:49.413 "current_io_qpairs": 1, 00:19:49.413 "pending_bdev_io": 0, 00:19:49.413 "completed_nvme_io": 19063, 00:19:49.413 "transports": [ 00:19:49.413 { 00:19:49.413 "trtype": "TCP" 00:19:49.413 } 00:19:49.413 ] 00:19:49.413 }, 00:19:49.413 { 00:19:49.413 "name": "nvmf_tgt_poll_group_002", 00:19:49.413 "admin_qpairs": 0, 00:19:49.413 "io_qpairs": 1, 00:19:49.413 "current_admin_qpairs": 0, 00:19:49.413 "current_io_qpairs": 1, 00:19:49.413 "pending_bdev_io": 0, 00:19:49.413 "completed_nvme_io": 18799, 00:19:49.413 
"transports": [ 00:19:49.413 { 00:19:49.413 "trtype": "TCP" 00:19:49.413 } 00:19:49.413 ] 00:19:49.413 }, 00:19:49.413 { 00:19:49.413 "name": "nvmf_tgt_poll_group_003", 00:19:49.413 "admin_qpairs": 0, 00:19:49.413 "io_qpairs": 1, 00:19:49.413 "current_admin_qpairs": 0, 00:19:49.413 "current_io_qpairs": 1, 00:19:49.413 "pending_bdev_io": 0, 00:19:49.413 "completed_nvme_io": 18548, 00:19:49.413 "transports": [ 00:19:49.413 { 00:19:49.413 "trtype": "TCP" 00:19:49.413 } 00:19:49.413 ] 00:19:49.413 } 00:19:49.413 ] 00:19:49.413 }' 00:19:49.413 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:49.413 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:19:49.413 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:19:49.413 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:19:49.413 09:04:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 2382430 00:19:57.526 Initializing NVMe Controllers 00:19:57.526 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:57.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:57.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:57.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:57.526 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:57.526 Initialization complete. Launching workers. 
00:19:57.526 ======================================================== 00:19:57.526 Latency(us) 00:19:57.526 Device Information : IOPS MiB/s Average min max 00:19:57.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10199.26 39.84 6273.86 2437.52 10534.42 00:19:57.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10363.26 40.48 6174.92 2384.84 10267.60 00:19:57.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10299.16 40.23 6212.77 1847.80 10131.24 00:19:57.526 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10257.16 40.07 6240.34 2625.55 10769.42 00:19:57.526 ======================================================== 00:19:57.526 Total : 41118.84 160.62 6225.26 1847.80 10769.42 00:19:57.526 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:57.526 rmmod nvme_tcp 00:19:57.526 rmmod nvme_fabrics 00:19:57.526 rmmod nvme_keyring 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:19:57.526 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:19:57.526 09:04:13 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 2382254 ']' 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 2382254 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2382254 ']' 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2382254 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2382254 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2382254' 00:19:57.527 killing process with pid 2382254 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2382254 00:19:57.527 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2382254 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:57.786 09:04:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:00.318 09:04:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:00.318 09:04:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:00.883 09:04:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:02.784 09:04:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:08.055 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:08.056 09:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.056 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:08.056 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.056 09:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.056 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.056 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:08.056 
09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:08.056 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:08.057 09:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:08.057 10.0.0.1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:08.057 10.0.0.2 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:08.057 09:04:23 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:08.057 09:04:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:08.057 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:08.058 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.058 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.302 ms 00:20:08.058 00:20:08.058 --- 10.0.0.1 ping statistics --- 00:20:08.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.058 rtt min/avg/max/mdev = 0.302/0.302/0.302/0.000 ms 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:08.058 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:08.058 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.058 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:20:08.058 00:20:08.058 --- 10.0.0.2 ping statistics --- 00:20:08.058 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.058 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:08.058 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:20:08.058 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:20:08.318 net.core.busy_poll = 1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:20:08.318 net.core.busy_read = 1 00:20:08.318 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:20:08.319 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:20:08.319 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:20:08.319 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:20:08.319 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=2386240 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 2386240 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 2386240 ']' 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.577 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.577 [2024-11-20 09:04:24.448311] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:20:08.577 [2024-11-20 09:04:24.448366] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.577 [2024-11-20 09:04:24.527474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:08.577 [2024-11-20 09:04:24.570599] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:08.577 [2024-11-20 09:04:24.570637] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:08.577 [2024-11-20 09:04:24.570644] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:08.577 [2024-11-20 09:04:24.570649] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:08.577 [2024-11-20 09:04:24.570654] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:08.577 [2024-11-20 09:04:24.572269] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:08.577 [2024-11-20 09:04:24.572387] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:08.577 [2024-11-20 09:04:24.572498] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:08.577 [2024-11-20 09:04:24.572499] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:08.577 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:08.837 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 [2024-11-20 09:04:24.770290] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 Malloc1 00:20:08.837 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:08.837 [2024-11-20 09:04:24.832286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=2386269 00:20:08.837 09:04:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:20:08.837 09:04:24 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:20:11.367 "tick_rate": 2300000000, 00:20:11.367 "poll_groups": [ 00:20:11.367 { 00:20:11.367 "name": "nvmf_tgt_poll_group_000", 00:20:11.367 "admin_qpairs": 1, 00:20:11.367 "io_qpairs": 1, 00:20:11.367 "current_admin_qpairs": 1, 00:20:11.367 "current_io_qpairs": 1, 00:20:11.367 "pending_bdev_io": 0, 00:20:11.367 "completed_nvme_io": 27373, 00:20:11.367 "transports": [ 00:20:11.367 { 00:20:11.367 "trtype": "TCP" 00:20:11.367 } 00:20:11.367 ] 00:20:11.367 }, 00:20:11.367 { 00:20:11.367 "name": "nvmf_tgt_poll_group_001", 00:20:11.367 "admin_qpairs": 0, 00:20:11.367 "io_qpairs": 3, 00:20:11.367 "current_admin_qpairs": 0, 00:20:11.367 "current_io_qpairs": 3, 00:20:11.367 "pending_bdev_io": 0, 00:20:11.367 "completed_nvme_io": 29009, 00:20:11.367 "transports": [ 00:20:11.367 { 00:20:11.367 "trtype": "TCP" 00:20:11.367 } 00:20:11.367 ] 00:20:11.367 }, 00:20:11.367 { 00:20:11.367 "name": "nvmf_tgt_poll_group_002", 00:20:11.367 "admin_qpairs": 0, 00:20:11.367 "io_qpairs": 0, 00:20:11.367 "current_admin_qpairs": 0, 00:20:11.367 "current_io_qpairs": 0, 00:20:11.367 "pending_bdev_io": 0, 00:20:11.367 "completed_nvme_io": 0, 00:20:11.367 "transports": 
[ 00:20:11.367 { 00:20:11.367 "trtype": "TCP" 00:20:11.367 } 00:20:11.367 ] 00:20:11.367 }, 00:20:11.367 { 00:20:11.367 "name": "nvmf_tgt_poll_group_003", 00:20:11.367 "admin_qpairs": 0, 00:20:11.367 "io_qpairs": 0, 00:20:11.367 "current_admin_qpairs": 0, 00:20:11.367 "current_io_qpairs": 0, 00:20:11.367 "pending_bdev_io": 0, 00:20:11.367 "completed_nvme_io": 0, 00:20:11.367 "transports": [ 00:20:11.367 { 00:20:11.367 "trtype": "TCP" 00:20:11.367 } 00:20:11.367 ] 00:20:11.367 } 00:20:11.367 ] 00:20:11.367 }' 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:20:11.367 09:04:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 2386269 00:20:19.484 Initializing NVMe Controllers 00:20:19.484 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:19.484 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:19.484 Initialization complete. Launching workers. 
00:20:19.484 ======================================================== 00:20:19.484 Latency(us) 00:20:19.484 Device Information : IOPS MiB/s Average min max 00:20:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 15066.60 58.85 4247.19 1878.36 6129.47 00:20:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5291.20 20.67 12093.73 1904.00 58806.82 00:20:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 4616.10 18.03 13902.88 1886.75 58535.48 00:20:19.484 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5204.10 20.33 12332.85 1548.71 58516.51 00:20:19.484 ======================================================== 00:20:19.484 Total : 30178.00 117.88 8494.25 1548.71 58806.82 00:20:19.484 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:19.484 rmmod nvme_tcp 00:20:19.484 rmmod nvme_fabrics 00:20:19.484 rmmod nvme_keyring 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:20:19.484 09:04:35 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 2386240 ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 2386240 ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2386240' 00:20:19.484 killing process with pid 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 2386240 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:19.484 09:04:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:22.777 09:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:20:22.777 00:20:22.777 real 0m49.946s 00:20:22.777 user 2m44.188s 00:20:22.777 sys 0m10.537s 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:22.777 ************************************ 00:20:22.777 END TEST nvmf_perf_adq 00:20:22.777 ************************************ 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:22.777 ************************************ 00:20:22.777 START TEST nvmf_shutdown 00:20:22.777 ************************************ 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:20:22.777 * Looking for test storage... 00:20:22.777 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:20:22.777 09:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:22.777 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:20:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.778 --rc genhtml_branch_coverage=1 00:20:22.778 --rc genhtml_function_coverage=1 00:20:22.778 --rc genhtml_legend=1 00:20:22.778 --rc geninfo_all_blocks=1 00:20:22.778 --rc geninfo_unexecuted_blocks=1 00:20:22.778 00:20:22.778 ' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.778 --rc genhtml_branch_coverage=1 00:20:22.778 --rc genhtml_function_coverage=1 00:20:22.778 --rc genhtml_legend=1 00:20:22.778 --rc geninfo_all_blocks=1 00:20:22.778 --rc geninfo_unexecuted_blocks=1 00:20:22.778 00:20:22.778 ' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.778 --rc genhtml_branch_coverage=1 00:20:22.778 --rc genhtml_function_coverage=1 00:20:22.778 --rc genhtml_legend=1 00:20:22.778 --rc geninfo_all_blocks=1 00:20:22.778 --rc geninfo_unexecuted_blocks=1 00:20:22.778 00:20:22.778 ' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:22.778 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:22.778 --rc genhtml_branch_coverage=1 00:20:22.778 --rc genhtml_function_coverage=1 00:20:22.778 --rc genhtml_legend=1 
00:20:22.778 --rc geninfo_all_blocks=1 00:20:22.778 --rc geninfo_unexecuted_blocks=1 00:20:22.778 00:20:22.778 ' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:22.778 09:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:22.778 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:22.778 ************************************ 00:20:22.778 START TEST nvmf_shutdown_tc1 00:20:22.778 ************************************ 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:22.778 09:04:38 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:22.778 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:22.779 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:20:22.779 09:04:38 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:29.349 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:29.349 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:29.349 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:29.349 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:29.349 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:29.350 Found net devices under 0000:86:00.0: cvl_0_0 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:29.350 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:29.350 Found net devices under 0000:86:00.1: cvl_0_1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # create_target_ns 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@68 -- # [[ phy == 
veth ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:29.350 10.0.0.1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:29.350 09:04:44 
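The `val_to_ip` calls traced at setup.sh@11-13 above turn the 32-bit pool values 167772161/167772162 into 10.0.0.1/10.0.0.2. A minimal sketch of that conversion follows; the shift/mask octet extraction is an assumption, since the log only shows the final `printf '%u.%u.%u.%u\n'` call.

```shell
# Sketch of the val_to_ip conversion seen in nvmf/setup.sh above:
# split a 32-bit integer into four octets and print dotted-quad.
# The shift/mask arithmetic is assumed; only printf appears in the log.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1, matching the log above
val_to_ip 167772162   # 10.0.0.2
```

This is why the pool base 0x0a000001 seen at setup.sh@25 yields addresses in 10.0.0.0/24, with each interface pair consuming two consecutive values.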
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:29.350 10.0.0.2 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.350 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:29.351 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:29.351 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:29.351 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:29.351 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:29.351 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:20:29.351 00:20:29.351 --- 10.0.0.1 ping statistics --- 00:20:29.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.351 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # 
local dev=target0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:29.351 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:29.351 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:20:29.351 00:20:29.351 --- 10.0.0.2 ping statistics --- 00:20:29.351 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:29.351 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:29.351 09:04:44 
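The `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step at setup.sh@260 above prepends the `ip netns exec nvmf_ns_spdk` prefix to the target command line, which is why nvmf_tgt later launches inside the namespace. The bash array-prepend idiom in isolation (the nvmf_tgt arguments here are illustrative placeholders, not the exact command from the log):

```shell
# Prepending one bash array to another, as setup.sh@260 does above.
# nvmf_tgt arguments are illustrative placeholders.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)
NVMF_APP=(nvmf_tgt -i 0 -m 0x1E)
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
echo "${NVMF_APP[@]}"   # ip netns exec nvmf_ns_spdk nvmf_tgt -i 0 -m 0x1E
```

Quoting each expansion as `"${arr[@]}"` preserves per-element word boundaries, so arguments containing spaces would survive the concatenation intact.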
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:29.351 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:29.351 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.352 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:29.352 
09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:29.352 09:04:44 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=2391743 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 2391743 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2391743 ']' 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:20:29.352 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:29.352 09:04:44 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.352 [2024-11-20 09:04:44.939462] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:20:29.352 [2024-11-20 09:04:44.939506] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:29.352 [2024-11-20 09:04:45.020102] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:29.352 [2024-11-20 09:04:45.063673] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:29.352 [2024-11-20 09:04:45.063709] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:29.352 [2024-11-20 09:04:45.063717] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:29.352 [2024-11-20 09:04:45.063723] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:29.352 [2024-11-20 09:04:45.063728] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
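`waitforlisten` above blocks until the target's RPC socket (`/var/tmp/spdk.sock`) appears, with `max_retries=100`. A hedged sketch of that polling pattern follows; the exact check inside autotest_common.sh is not visible in the log, so the `-e` path test and the 0.1 s interval are assumptions, and `waitfor_path` is a hypothetical helper name.

```shell
# Poll-until-exists retry loop, in the spirit of waitforlisten above.
# The path test and sleep interval are assumptions; the log only
# shows 'local max_retries=100'.
waitfor_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$path" ] && return 0
        sleep 0.1
    done
    return 1
}
```

Usage would be along the lines of `waitfor_path /var/tmp/spdk.sock 100`, succeeding once nvmf_tgt has created its UNIX-domain RPC socket and failing after the retry budget is exhausted.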
00:20:29.352 [2024-11-20 09:04:45.065211] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:29.352 [2024-11-20 09:04:45.065319] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:29.352 [2024-11-20 09:04:45.065426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:29.352 [2024-11-20 09:04:45.065426] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 [2024-11-20 09:04:45.828092] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:29.920 09:04:45 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.920 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.921 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:29.921 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:20:29.921 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:29.921 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:29.921 09:04:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:29.921 Malloc1 00:20:29.921 [2024-11-20 09:04:45.936425] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:29.921 Malloc2 00:20:30.179 Malloc3 00:20:30.179 Malloc4 00:20:30.179 Malloc5 00:20:30.179 Malloc6 00:20:30.179 Malloc7 00:20:30.439 Malloc8 00:20:30.439 Malloc9 
00:20:30.439 Malloc10 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=2392020 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 2392020 /var/tmp/bdevperf.sock 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 2392020 ']' 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:30.439 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
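The sequence above starts `bdev_svc` in the background and then blocks in `waitforlisten` until the app's RPC UNIX socket (`/var/tmp/bdevperf.sock`) appears. A minimal sketch of that polling pattern; the `wait_for_socket` name, retry count, and interval are illustrative stand-ins, not SPDK's actual `waitforlisten` implementation:

```shell
#!/usr/bin/env bash
# Illustrative polling loop: succeed once the RPC socket exists, give up
# after a bounded number of retries. (The real helper also verifies the
# target PID is still alive with `kill -0`; omitted here for brevity.)
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                       # timed out waiting for the listener
}
```

Called as `wait_for_socket /var/tmp/bdevperf.sock`, this returns 0 as soon as the service creates its socket, and nonzero if the retry budget runs out first.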
00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.439 { 00:20:30.439 "params": { 00:20:30.439 "name": "Nvme$subsystem", 00:20:30.439 "trtype": "$TEST_TRANSPORT", 00:20:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.439 "adrfam": "ipv4", 00:20:30.439 "trsvcid": "$NVMF_PORT", 00:20:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.439 "hdgst": ${hdgst:-false}, 00:20:30.439 "ddgst": ${ddgst:-false} 00:20:30.439 }, 00:20:30.439 "method": "bdev_nvme_attach_controller" 00:20:30.439 } 00:20:30.439 EOF 00:20:30.439 )") 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.439 { 00:20:30.439 "params": { 00:20:30.439 "name": "Nvme$subsystem", 00:20:30.439 "trtype": "$TEST_TRANSPORT", 00:20:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.439 "adrfam": "ipv4", 00:20:30.439 "trsvcid": "$NVMF_PORT", 00:20:30.439 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.439 "hdgst": ${hdgst:-false}, 00:20:30.439 "ddgst": ${ddgst:-false} 00:20:30.439 }, 00:20:30.439 "method": "bdev_nvme_attach_controller" 00:20:30.439 } 00:20:30.439 EOF 00:20:30.439 )") 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.439 { 00:20:30.439 "params": { 00:20:30.439 "name": "Nvme$subsystem", 00:20:30.439 "trtype": "$TEST_TRANSPORT", 00:20:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.439 "adrfam": "ipv4", 00:20:30.439 "trsvcid": "$NVMF_PORT", 00:20:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.439 "hdgst": ${hdgst:-false}, 00:20:30.439 "ddgst": ${ddgst:-false} 00:20:30.439 }, 00:20:30.439 "method": "bdev_nvme_attach_controller" 00:20:30.439 } 00:20:30.439 EOF 00:20:30.439 )") 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.439 { 00:20:30.439 "params": { 00:20:30.439 "name": "Nvme$subsystem", 00:20:30.439 "trtype": "$TEST_TRANSPORT", 00:20:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.439 "adrfam": "ipv4", 00:20:30.439 "trsvcid": "$NVMF_PORT", 00:20:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.439 "hdgst": 
${hdgst:-false}, 00:20:30.439 "ddgst": ${ddgst:-false} 00:20:30.439 }, 00:20:30.439 "method": "bdev_nvme_attach_controller" 00:20:30.439 } 00:20:30.439 EOF 00:20:30.439 )") 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.439 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.439 { 00:20:30.439 "params": { 00:20:30.439 "name": "Nvme$subsystem", 00:20:30.439 "trtype": "$TEST_TRANSPORT", 00:20:30.439 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.439 "adrfam": "ipv4", 00:20:30.439 "trsvcid": "$NVMF_PORT", 00:20:30.439 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.439 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.439 "hdgst": ${hdgst:-false}, 00:20:30.439 "ddgst": ${ddgst:-false} 00:20:30.439 }, 00:20:30.439 "method": "bdev_nvme_attach_controller" 00:20:30.439 } 00:20:30.439 EOF 00:20:30.439 )") 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.440 { 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme$subsystem", 00:20:30.440 "trtype": "$TEST_TRANSPORT", 00:20:30.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "$NVMF_PORT", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.440 "hdgst": ${hdgst:-false}, 00:20:30.440 "ddgst": ${ddgst:-false} 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 
00:20:30.440 } 00:20:30.440 EOF 00:20:30.440 )") 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.440 { 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme$subsystem", 00:20:30.440 "trtype": "$TEST_TRANSPORT", 00:20:30.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "$NVMF_PORT", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.440 "hdgst": ${hdgst:-false}, 00:20:30.440 "ddgst": ${ddgst:-false} 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 } 00:20:30.440 EOF 00:20:30.440 )") 00:20:30.440 [2024-11-20 09:04:46.412349] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:20:30.440 [2024-11-20 09:04:46.412397] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.440 { 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme$subsystem", 00:20:30.440 "trtype": "$TEST_TRANSPORT", 00:20:30.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "$NVMF_PORT", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.440 "hdgst": ${hdgst:-false}, 00:20:30.440 "ddgst": ${ddgst:-false} 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 } 00:20:30.440 EOF 00:20:30.440 )") 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.440 { 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme$subsystem", 00:20:30.440 "trtype": "$TEST_TRANSPORT", 00:20:30.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "$NVMF_PORT", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.440 "hdgst": ${hdgst:-false}, 
00:20:30.440 "ddgst": ${ddgst:-false} 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 } 00:20:30.440 EOF 00:20:30.440 )") 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:30.440 { 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme$subsystem", 00:20:30.440 "trtype": "$TEST_TRANSPORT", 00:20:30.440 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "$NVMF_PORT", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:30.440 "hdgst": ${hdgst:-false}, 00:20:30.440 "ddgst": ${ddgst:-false} 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 } 00:20:30.440 EOF 00:20:30.440 )") 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:20:30.440 09:04:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme1", 00:20:30.440 "trtype": "tcp", 00:20:30.440 "traddr": "10.0.0.2", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "4420", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:30.440 "hdgst": false, 00:20:30.440 "ddgst": false 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 },{ 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme2", 00:20:30.440 "trtype": "tcp", 00:20:30.440 "traddr": "10.0.0.2", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "4420", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:30.440 "hdgst": false, 00:20:30.440 "ddgst": false 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 },{ 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme3", 00:20:30.440 "trtype": "tcp", 00:20:30.440 "traddr": "10.0.0.2", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "4420", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:30.440 "hdgst": false, 00:20:30.440 "ddgst": false 00:20:30.440 }, 00:20:30.440 "method": "bdev_nvme_attach_controller" 00:20:30.440 },{ 00:20:30.440 "params": { 00:20:30.440 "name": "Nvme4", 00:20:30.440 "trtype": "tcp", 00:20:30.440 "traddr": "10.0.0.2", 00:20:30.440 "adrfam": "ipv4", 00:20:30.440 "trsvcid": "4420", 00:20:30.440 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:30.440 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:30.440 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 
00:20:30.441 "name": "Nvme5", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 00:20:30.441 "name": "Nvme6", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 00:20:30.441 "name": "Nvme7", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 00:20:30.441 "name": "Nvme8", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 00:20:30.441 "name": "Nvme9", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 },{ 00:20:30.441 "params": { 00:20:30.441 "name": "Nvme10", 00:20:30.441 "trtype": "tcp", 00:20:30.441 "traddr": "10.0.0.2", 00:20:30.441 "adrfam": "ipv4", 00:20:30.441 "trsvcid": "4420", 00:20:30.441 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:30.441 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:30.441 "hdgst": false, 00:20:30.441 "ddgst": false 00:20:30.441 }, 00:20:30.441 "method": "bdev_nvme_attach_controller" 00:20:30.441 }' 00:20:30.700 [2024-11-20 09:04:46.487850] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:30.700 [2024-11-20 09:04:46.529266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 2392020 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:20:32.602 09:04:48 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:20:33.539 
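The `config+=("$(cat <<-EOF ... EOF)")` / `cat` pairs traced repeatedly above are `gen_nvmf_target_json` emitting one JSON stanza per subsystem, then joining the array with `IFS=,` and printing the result for `jq`. A trimmed-down, self-contained sketch of that assembly pattern (the `gen_config` name and the two-field stanza are simplifications, not the real helper):

```shell
#!/usr/bin/env bash
# One heredoc-generated JSON fragment per subsystem, accumulated in an
# array; "${config[*]}" then expands with the first character of IFS
# (a comma) between elements, producing one comma-joined document.
gen_config() {
  local subsystem
  local config=()
  for subsystem in "$@"; do
    config+=("$(cat <<EOF
{"name":"Nvme$subsystem","subnqn":"nqn.2016-06.io.spdk:cnode$subsystem"}
EOF
)")
  done
  local IFS=,
  printf '%s\n' "${config[*]}"
}

gen_config 1 2
```

Setting `IFS` to a comma just before expanding `"${config[*]}"` is what turns the per-subsystem stanzas into the single joined configuration seen in the `printf '%s\n'` output above.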
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 2392020 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 2391743 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 
09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 
00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.539 "hdgst": ${hdgst:-false}, 00:20:33.539 "ddgst": ${ddgst:-false} 00:20:33.539 }, 00:20:33.539 "method": "bdev_nvme_attach_controller" 00:20:33.539 } 00:20:33.539 EOF 00:20:33.539 )") 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.539 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.539 { 00:20:33.539 "params": { 00:20:33.539 "name": "Nvme$subsystem", 00:20:33.539 "trtype": "$TEST_TRANSPORT", 00:20:33.539 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.539 "adrfam": "ipv4", 00:20:33.539 "trsvcid": "$NVMF_PORT", 00:20:33.539 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.539 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.540 "hdgst": ${hdgst:-false}, 00:20:33.540 "ddgst": ${ddgst:-false} 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 } 00:20:33.540 EOF 00:20:33.540 )") 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.540 [2024-11-20 09:04:49.347563] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:20:33.540 [2024-11-20 09:04:49.347617] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2392509 ] 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.540 { 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme$subsystem", 00:20:33.540 "trtype": "$TEST_TRANSPORT", 00:20:33.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "$NVMF_PORT", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.540 "hdgst": ${hdgst:-false}, 00:20:33.540 "ddgst": ${ddgst:-false} 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 } 00:20:33.540 EOF 00:20:33.540 )") 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.540 { 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme$subsystem", 00:20:33.540 "trtype": "$TEST_TRANSPORT", 00:20:33.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "$NVMF_PORT", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.540 "hdgst": ${hdgst:-false}, 00:20:33.540 "ddgst": ${ddgst:-false} 00:20:33.540 }, 00:20:33.540 "method": 
"bdev_nvme_attach_controller" 00:20:33.540 } 00:20:33.540 EOF 00:20:33.540 )") 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:33.540 { 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme$subsystem", 00:20:33.540 "trtype": "$TEST_TRANSPORT", 00:20:33.540 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "$NVMF_PORT", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:33.540 "hdgst": ${hdgst:-false}, 00:20:33.540 "ddgst": ${ddgst:-false} 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 } 00:20:33.540 EOF 00:20:33.540 )") 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:20:33.540 09:04:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme1", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme2", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme3", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme4", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 
00:20:33.540 "name": "Nvme5", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme6", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.540 "params": { 00:20:33.540 "name": "Nvme7", 00:20:33.540 "trtype": "tcp", 00:20:33.540 "traddr": "10.0.0.2", 00:20:33.540 "adrfam": "ipv4", 00:20:33.540 "trsvcid": "4420", 00:20:33.540 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:33.540 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:33.540 "hdgst": false, 00:20:33.540 "ddgst": false 00:20:33.540 }, 00:20:33.540 "method": "bdev_nvme_attach_controller" 00:20:33.540 },{ 00:20:33.541 "params": { 00:20:33.541 "name": "Nvme8", 00:20:33.541 "trtype": "tcp", 00:20:33.541 "traddr": "10.0.0.2", 00:20:33.541 "adrfam": "ipv4", 00:20:33.541 "trsvcid": "4420", 00:20:33.541 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:33.541 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:33.541 "hdgst": false, 00:20:33.541 "ddgst": false 00:20:33.541 }, 00:20:33.541 "method": "bdev_nvme_attach_controller" 00:20:33.541 },{ 00:20:33.541 "params": { 00:20:33.541 "name": "Nvme9", 00:20:33.541 "trtype": "tcp", 00:20:33.541 "traddr": "10.0.0.2", 00:20:33.541 "adrfam": "ipv4", 00:20:33.541 "trsvcid": "4420", 00:20:33.541 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:33.541 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:33.541 "hdgst": false, 00:20:33.541 "ddgst": false 00:20:33.541 }, 00:20:33.541 "method": "bdev_nvme_attach_controller" 00:20:33.541 },{ 00:20:33.541 "params": { 00:20:33.541 "name": "Nvme10", 00:20:33.541 "trtype": "tcp", 00:20:33.541 "traddr": "10.0.0.2", 00:20:33.541 "adrfam": "ipv4", 00:20:33.541 "trsvcid": "4420", 00:20:33.541 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:33.541 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:33.541 "hdgst": false, 00:20:33.541 "ddgst": false 00:20:33.541 }, 00:20:33.541 "method": "bdev_nvme_attach_controller" 00:20:33.541 }' 00:20:33.541 [2024-11-20 09:04:49.425643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:33.541 [2024-11-20 09:04:49.467818] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:34.918 Running I/O for 1 seconds... 00:20:35.855 2193.00 IOPS, 137.06 MiB/s 00:20:35.855 Latency(us) 00:20:35.855 [2024-11-20T08:04:51.896Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:35.855 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme1n1 : 1.15 282.43 17.65 0.00 0.00 223463.43 6325.65 215186.03 00:20:35.855 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme2n1 : 1.06 241.12 15.07 0.00 0.00 258574.91 17324.30 235245.75 00:20:35.855 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme3n1 : 1.15 286.41 17.90 0.00 0.00 213829.06 5812.76 222480.47 00:20:35.855 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme4n1 : 1.11 289.72 18.11 0.00 0.00 205003.12 13335.15 221568.67 00:20:35.855 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme5n1 : 1.07 239.16 14.95 0.00 0.00 248817.75 17324.30 222480.47 00:20:35.855 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme6n1 : 1.16 274.80 17.17 0.00 0.00 213988.66 12936.24 223392.28 00:20:35.855 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme7n1 : 1.16 278.93 17.43 0.00 0.00 207943.78 2578.70 219745.06 00:20:35.855 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme8n1 : 1.16 276.95 17.31 0.00 0.00 206430.83 13278.16 225215.89 00:20:35.855 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme9n1 : 1.17 273.97 17.12 0.00 0.00 205774.85 16412.49 217921.45 00:20:35.855 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:35.855 Verification LBA range: start 0x0 length 0x400 00:20:35.855 Nvme10n1 : 1.17 273.28 17.08 0.00 0.00 203051.19 11454.55 233422.14 00:20:35.855 [2024-11-20T08:04:51.896Z] =================================================================================================================== 00:20:35.855 [2024-11-20T08:04:51.896Z] Total : 2716.76 169.80 0.00 0.00 217205.39 2578.70 235245.75 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:36.114 rmmod nvme_tcp 00:20:36.114 rmmod nvme_fabrics 00:20:36.114 rmmod nvme_keyring 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 2391743 ']' 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 2391743 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 2391743 ']' 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 2391743 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2391743 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2391743' 00:20:36.114 killing process with pid 2391743 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 2391743 00:20:36.114 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 2391743 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@264 -- # local dev 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 
15> /dev/null' 00:20:36.682 09:04:52 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # return 0 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 
00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@284 -- # iptr 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-save 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-restore 00:20:38.717 00:20:38.717 real 0m15.814s 00:20:38.717 user 0m35.583s 00:20:38.717 sys 0m5.887s 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:20:38.717 ************************************ 00:20:38.717 END TEST nvmf_shutdown_tc1 00:20:38.717 ************************************ 
00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:38.717 ************************************ 00:20:38.717 START TEST nvmf_shutdown_tc2 00:20:38.717 ************************************ 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:38.717 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:38.718 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 
-- # local -ga e810 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:38.718 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.718 
09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:38.718 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:38.718 Found net devices under 0000:86:00.0: cvl_0_0 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:38.718 Found net devices under 0000:86:00.1: cvl_0_1 
00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # create_target_ns 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.718 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:38.719 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:38.719 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:38.980 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:38.980 10.0.0.1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:38.980 10.0.0.2 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:38.980 
09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:38.980 09:04:54 
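The `ipts` call above expands (per `nvmf/common.sh@541`) into an `iptables` invocation that appends `-m comment --comment 'SPDK_NVMF:<original args>'`. A sketch of that wrapper pattern, with `echo` standing in for the real `iptables` call since applying firewall rules needs root (the wrapper body is inferred from the traced expansion, not taken from the script source):

```shell
#!/usr/bin/env bash
# Sketch of the ipts wrapper implied by the trace: tag every rule with
# an SPDK_NVMF comment so teardown can later find and delete exactly
# the rules this test added. `echo` used here instead of iptables.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Embedding the original argument string in the comment makes cleanup a matter of listing rules and matching on the `SPDK_NVMF:` prefix.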
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:38.980 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:38.980 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:20:38.981 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:38.981 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:20:38.981 00:20:38.981 --- 10.0.0.1 ping statistics --- 00:20:38.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.981 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:38.981 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:38.981 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:38.981 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:20:38.981 00:20:38.981 --- 10.0.0.2 ping statistics --- 00:20:38.981 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:38.981 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:38.981 09:04:54 
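Throughout the trace, helpers such as `ping_ip` and `set_ip` take an optional second argument that is the *name* of a command array (here `NVMF_TARGET_NS_CMD`), bound via `local -n` so the same function runs either bare or prefixed with `ip netns exec nvmf_ns_spdk`. A sketch of that nameref pattern, printing the command instead of `eval`-ing it (the array value is assumed from the traced expansions):

```shell
#!/usr/bin/env bash
# Sketch of the in_ns nameref pattern from nvmf/setup.sh@89-92:
# if a variable name is passed, dereference it to get the netns prefix.
NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)   # assumed from the log

run_maybe_in_ns() {
  local cmd=$1 in_ns=${2:-}
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns              # nameref to the caller's array
    echo "${ns[*]} $cmd"            # real helper would eval this
  else
    echo "$cmd"
  fi
}

run_maybe_in_ns 'ping -c 1 10.0.0.1' NVMF_TARGET_NS_CMD
run_maybe_in_ns 'ping -c 1 10.0.0.2'
```

This matches the ping pair above: the initiator IP is pinged from inside the namespace, while the target IP is pinged from the host side.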
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:38.981 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.981 09:04:54 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:38.981 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:38.982 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:38.982 
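The repeated `get_net_dev initiator0` → `cvl_0_0` and `get_net_dev target0` → `cvl_0_1` resolutions above rely on the `dev_map` associative array populated earlier (`dev_map["$key_initiator"]=cvl_0_0`, `dev_map["$key_target"]=cvl_0_1`). A minimal sketch of that lookup, assuming the key names are exactly the logical device names (the trace shows the keys only via variables):

```shell
#!/usr/bin/env bash
# Sketch of the dev_map lookup behind get_net_dev (nvmf/setup.sh@107-110):
# logical roles map to physical interfaces; unknown roles return 1,
# which is how the trace yields NVMF_SECOND_INITIATOR_IP= (empty).
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)  # assumed keys

get_net_dev() {
  local dev=$1
  if [[ -n $dev && -n ${dev_map[$dev]:-} ]]; then
    echo "${dev_map[$dev]}"
  else
    return 1
  fi
}

get_net_dev initiator0   # cvl_0_0
get_net_dev target0      # cvl_0_1
get_net_dev initiator1 || echo "no such device"
```

The `return 1` path is what makes the later `initiator1`/`target1` probes fall through cleanly, leaving the second IP variables unset instead of failing the test.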
09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:38.982 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:38.982 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:38.982 09:04:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:38.982 09:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:38.982 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2393564 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2393564 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2393564 ']' 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:39.241 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:39.241 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:39.241 [2024-11-20 09:04:55.106259] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:20:39.241 [2024-11-20 09:04:55.106309] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:39.241 [2024-11-20 09:04:55.184152] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:39.241 [2024-11-20 09:04:55.224869] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:39.241 [2024-11-20 09:04:55.224909] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:39.241 [2024-11-20 09:04:55.224917] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:39.241 [2024-11-20 09:04:55.224923] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:39.241 [2024-11-20 09:04:55.224928] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:39.241 [2024-11-20 09:04:55.226413] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:39.241 [2024-11-20 09:04:55.226523] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:39.241 [2024-11-20 09:04:55.226630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:39.241 [2024-11-20 09:04:55.226632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.178 [2024-11-20 09:04:55.977951] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.178 09:04:55 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.178 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:40.179 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.179 Malloc1 00:20:40.179 [2024-11-20 09:04:56.086056] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:40.179 Malloc2 00:20:40.179 Malloc3 00:20:40.179 Malloc4 00:20:40.438 Malloc5 00:20:40.438 Malloc6 00:20:40.438 Malloc7 00:20:40.438 Malloc8 00:20:40.438 Malloc9 
00:20:40.438 Malloc10 00:20:40.438 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:40.438 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:40.438 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:40.438 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=2393843 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 2393843 /var/tmp/bdevperf.sock 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2393843 ']' 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:40.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.698 { 00:20:40.698 "params": { 00:20:40.698 "name": "Nvme$subsystem", 00:20:40.698 "trtype": "$TEST_TRANSPORT", 00:20:40.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.698 "adrfam": "ipv4", 00:20:40.698 "trsvcid": "$NVMF_PORT", 00:20:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.698 "hdgst": ${hdgst:-false}, 00:20:40.698 "ddgst": ${ddgst:-false} 00:20:40.698 }, 00:20:40.698 "method": "bdev_nvme_attach_controller" 00:20:40.698 } 00:20:40.698 EOF 00:20:40.698 )") 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.698 { 00:20:40.698 "params": { 00:20:40.698 "name": "Nvme$subsystem", 00:20:40.698 "trtype": "$TEST_TRANSPORT", 00:20:40.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.698 
"adrfam": "ipv4", 00:20:40.698 "trsvcid": "$NVMF_PORT", 00:20:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.698 "hdgst": ${hdgst:-false}, 00:20:40.698 "ddgst": ${ddgst:-false} 00:20:40.698 }, 00:20:40.698 "method": "bdev_nvme_attach_controller" 00:20:40.698 } 00:20:40.698 EOF 00:20:40.698 )") 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.698 { 00:20:40.698 "params": { 00:20:40.698 "name": "Nvme$subsystem", 00:20:40.698 "trtype": "$TEST_TRANSPORT", 00:20:40.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.698 "adrfam": "ipv4", 00:20:40.698 "trsvcid": "$NVMF_PORT", 00:20:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.698 "hdgst": ${hdgst:-false}, 00:20:40.698 "ddgst": ${ddgst:-false} 00:20:40.698 }, 00:20:40.698 "method": "bdev_nvme_attach_controller" 00:20:40.698 } 00:20:40.698 EOF 00:20:40.698 )") 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.698 { 00:20:40.698 "params": { 00:20:40.698 "name": "Nvme$subsystem", 00:20:40.698 "trtype": "$TEST_TRANSPORT", 00:20:40.698 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.698 "adrfam": "ipv4", 00:20:40.698 "trsvcid": "$NVMF_PORT", 00:20:40.698 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:40.698 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.698 "hdgst": ${hdgst:-false}, 00:20:40.698 "ddgst": ${ddgst:-false} 00:20:40.698 }, 00:20:40.698 "method": "bdev_nvme_attach_controller" 00:20:40.698 } 00:20:40.698 EOF 00:20:40.698 )") 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.698 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": ${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": 
${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": ${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 [2024-11-20 09:04:56.557690] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:20:40.699 [2024-11-20 09:04:56.557736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2393843 ] 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": ${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": ${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": 
"bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:40.699 { 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme$subsystem", 00:20:40.699 "trtype": "$TEST_TRANSPORT", 00:20:40.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "$NVMF_PORT", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:40.699 "hdgst": ${hdgst:-false}, 00:20:40.699 "ddgst": ${ddgst:-false} 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 } 00:20:40.699 EOF 00:20:40.699 )") 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:20:40.699 09:04:56 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme1", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme2", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme3", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme4", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 
00:20:40.699 "name": "Nvme5", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme6", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme7", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:40.699 "hdgst": false, 00:20:40.699 "ddgst": false 00:20:40.699 }, 00:20:40.699 "method": "bdev_nvme_attach_controller" 00:20:40.699 },{ 00:20:40.699 "params": { 00:20:40.699 "name": "Nvme8", 00:20:40.699 "trtype": "tcp", 00:20:40.699 "traddr": "10.0.0.2", 00:20:40.699 "adrfam": "ipv4", 00:20:40.699 "trsvcid": "4420", 00:20:40.699 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:20:40.699 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:40.700 "hdgst": false, 00:20:40.700 "ddgst": false 00:20:40.700 }, 00:20:40.700 "method": "bdev_nvme_attach_controller" 00:20:40.700 },{ 00:20:40.700 "params": { 00:20:40.700 "name": "Nvme9", 00:20:40.700 "trtype": "tcp", 00:20:40.700 "traddr": "10.0.0.2", 00:20:40.700 "adrfam": "ipv4", 00:20:40.700 "trsvcid": "4420", 00:20:40.700 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:40.700 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:20:40.700 "hdgst": false, 00:20:40.700 "ddgst": false 00:20:40.700 }, 00:20:40.700 "method": "bdev_nvme_attach_controller" 00:20:40.700 },{ 00:20:40.700 "params": { 00:20:40.700 "name": "Nvme10", 00:20:40.700 "trtype": "tcp", 00:20:40.700 "traddr": "10.0.0.2", 00:20:40.700 "adrfam": "ipv4", 00:20:40.700 "trsvcid": "4420", 00:20:40.700 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:40.700 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:40.700 "hdgst": false, 00:20:40.700 "ddgst": false 00:20:40.700 }, 00:20:40.700 "method": "bdev_nvme_attach_controller" 00:20:40.700 }' 00:20:40.700 [2024-11-20 09:04:56.637477] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:40.700 [2024-11-20 09:04:56.679715] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:42.604 Running I/O for 10 seconds... 00:20:42.604 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:42.604 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:20:42.604 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:42.604 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.604 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:42.863 09:04:58 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:42.863 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:43.122 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:43.122 09:04:58 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:43.122 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=83 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 83 -ge 100 ']' 00:20:43.122 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=195 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 195 -ge 100 ']' 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 2393843 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2393843 ']' 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2393843 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393843 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393843' 00:20:43.380 killing process with pid 2393843 00:20:43.380 09:04:59 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2393843 00:20:43.380 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2393843 00:20:43.639 Received shutdown signal, test time was about 0.923119 seconds 00:20:43.639 00:20:43.639 Latency(us) 00:20:43.639 [2024-11-20T08:04:59.680Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:43.639 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme1n1 : 0.90 287.59 17.97 0.00 0.00 218629.63 4957.94 206067.98 00:20:43.639 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme2n1 : 0.91 281.61 17.60 0.00 0.00 220481.67 17438.27 205156.17 00:20:43.639 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme3n1 : 0.90 283.56 17.72 0.00 0.00 214964.98 14588.88 238892.97 00:20:43.639 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme4n1 : 0.90 285.56 17.85 0.00 0.00 209161.79 25986.45 205156.17 00:20:43.639 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme5n1 : 0.92 277.52 17.35 0.00 0.00 211886.30 16754.42 229774.91 00:20:43.639 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme6n1 : 0.92 279.30 17.46 0.00 0.00 206415.25 18578.03 242540.19 00:20:43.639 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme7n1 : 
0.92 278.61 17.41 0.00 0.00 202462.16 13563.10 226127.69 00:20:43.639 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme8n1 : 0.91 284.82 17.80 0.00 0.00 194155.72 2778.16 199685.34 00:20:43.639 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme9n1 : 0.89 216.18 13.51 0.00 0.00 249939.70 31457.28 226127.69 00:20:43.639 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:20:43.639 Verification LBA range: start 0x0 length 0x400 00:20:43.639 Nvme10n1 : 0.89 215.26 13.45 0.00 0.00 246017.04 18008.15 244363.80 00:20:43.639 [2024-11-20T08:04:59.680Z] =================================================================================================================== 00:20:43.639 [2024-11-20T08:04:59.680Z] Total : 2690.00 168.13 0.00 0.00 215771.79 2778.16 244363.80 00:20:43.639 09:04:59 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # stoptarget 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@46 -- # nvmftestfini 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:45.016 rmmod nvme_tcp 00:20:45.016 rmmod nvme_fabrics 00:20:45.016 rmmod nvme_keyring 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 2393564 ']' 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 2393564 ']' 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 
-- # '[' Linux = Linux ']' 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2393564' 00:20:45.016 killing process with pid 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 2393564 00:20:45.016 09:05:00 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 2393564 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@264 -- # local dev 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:45.276 09:05:01 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:47.182 
09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # return 0 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:47.182 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@284 -- # iptr 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-save 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:47.182 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-restore 00:20:47.443 00:20:47.443 real 0m8.601s 00:20:47.443 user 0m26.812s 00:20:47.443 sys 0m1.472s 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:20:47.443 ************************************ 00:20:47.443 END TEST nvmf_shutdown_tc2 00:20:47.443 ************************************ 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:47.443 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:47.443 ************************************ 00:20:47.443 START TEST nvmf_shutdown_tc3 00:20:47.443 ************************************ 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # remove_target_ns 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:47.443 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # net_devs=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 
00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:47.443 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:47.443 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:47.443 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 
-- # (( 1 == 0 )) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:47.443 Found net devices under 0000:86:00.0: cvl_0_0 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:47.443 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:47.444 Found net devices under 0000:86:00.1: cvl_0_1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:47.444 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # create_target_ns 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.444 
09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:47.444 
09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:47.444 10.0.0.1 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 
167772162 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:47.444 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:47.705 10.0.0.2 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # set_up 
cvl_0_1 NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:47.705 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # 
dev=cvl_0_0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:47.705 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:47.705 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.367 ms 00:20:47.705 00:20:47.705 --- 10.0.0.1 ping statistics --- 00:20:47.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.705 rtt min/avg/max/mdev = 0.367/0.367/0.367/0.000 ms 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:47.705 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:47.705 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:47.705 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.216 ms 00:20:47.705 00:20:47.705 --- 10.0.0.2 ping statistics --- 00:20:47.705 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:47.705 rtt min/avg/max/mdev = 0.216/0.216/0.216/0.000 ms 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:47.706 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:47.706 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.706 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:47.706 
09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:47.706 09:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=2395136 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 2395136 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2395136 ']' 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:47.706 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:47.706 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:47.966 09:05:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:47.966 [2024-11-20 09:05:03.797508] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:20:47.966 [2024-11-20 09:05:03.797553] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:47.966 [2024-11-20 09:05:03.878526] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:47.966 [2024-11-20 09:05:03.920499] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:47.966 [2024-11-20 09:05:03.920538] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:47.966 [2024-11-20 09:05:03.920546] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:47.966 [2024-11-20 09:05:03.920552] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:47.966 [2024-11-20 09:05:03.920557] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:47.966 [2024-11-20 09:05:03.922067] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:47.966 [2024-11-20 09:05:03.922176] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:47.966 [2024-11-20 09:05:03.922273] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:47.966 [2024-11-20 09:05:03.922274] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.902 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.903 [2024-11-20 09:05:04.676183] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:48.903 09:05:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:48.903 09:05:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:48.903 Malloc1 00:20:48.903 [2024-11-20 09:05:04.787430] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:48.903 Malloc2 00:20:48.903 Malloc3 00:20:48.903 Malloc4 00:20:48.903 Malloc5 00:20:49.163 Malloc6 00:20:49.163 Malloc7 00:20:49.163 Malloc8 00:20:49.163 Malloc9 
00:20:49.163 Malloc10 00:20:49.163 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:49.163 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:49.163 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:49.163 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=2395421 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 2395421 /var/tmp/bdevperf.sock 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 2395421 ']' 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:20:49.423 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.423 { 00:20:49.423 "params": { 00:20:49.423 "name": "Nvme$subsystem", 00:20:49.423 "trtype": "$TEST_TRANSPORT", 00:20:49.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.423 "adrfam": "ipv4", 00:20:49.423 "trsvcid": "$NVMF_PORT", 00:20:49.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.423 "hdgst": ${hdgst:-false}, 00:20:49.423 "ddgst": ${ddgst:-false} 00:20:49.423 }, 00:20:49.423 "method": "bdev_nvme_attach_controller" 00:20:49.423 } 00:20:49.423 EOF 00:20:49.423 )") 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.423 { 00:20:49.423 "params": { 00:20:49.423 "name": "Nvme$subsystem", 00:20:49.423 "trtype": "$TEST_TRANSPORT", 00:20:49.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.423 
"adrfam": "ipv4", 00:20:49.423 "trsvcid": "$NVMF_PORT", 00:20:49.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.423 "hdgst": ${hdgst:-false}, 00:20:49.423 "ddgst": ${ddgst:-false} 00:20:49.423 }, 00:20:49.423 "method": "bdev_nvme_attach_controller" 00:20:49.423 } 00:20:49.423 EOF 00:20:49.423 )") 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.423 { 00:20:49.423 "params": { 00:20:49.423 "name": "Nvme$subsystem", 00:20:49.423 "trtype": "$TEST_TRANSPORT", 00:20:49.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.423 "adrfam": "ipv4", 00:20:49.423 "trsvcid": "$NVMF_PORT", 00:20:49.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.423 "hdgst": ${hdgst:-false}, 00:20:49.423 "ddgst": ${ddgst:-false} 00:20:49.423 }, 00:20:49.423 "method": "bdev_nvme_attach_controller" 00:20:49.423 } 00:20:49.423 EOF 00:20:49.423 )") 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.423 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.423 { 00:20:49.423 "params": { 00:20:49.423 "name": "Nvme$subsystem", 00:20:49.423 "trtype": "$TEST_TRANSPORT", 00:20:49.423 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.423 "adrfam": "ipv4", 00:20:49.423 "trsvcid": "$NVMF_PORT", 00:20:49.423 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:20:49.423 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.423 "hdgst": ${hdgst:-false}, 00:20:49.423 "ddgst": ${ddgst:-false} 00:20:49.423 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": ${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": 
${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 [2024-11-20 09:05:05.257435] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:20:49.424 [2024-11-20 09:05:05.257482] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2395421 ] 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": ${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": ${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": ${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:20:49.424 { 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme$subsystem", 00:20:49.424 "trtype": "$TEST_TRANSPORT", 00:20:49.424 "traddr": "$NVMF_FIRST_TARGET_IP", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "$NVMF_PORT", 00:20:49.424 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:20:49.424 "hdgst": ${hdgst:-false}, 00:20:49.424 "ddgst": ${ddgst:-false} 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 } 00:20:49.424 EOF 00:20:49.424 )") 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:20:49.424 09:05:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme1", 00:20:49.424 "trtype": "tcp", 00:20:49.424 "traddr": "10.0.0.2", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "4420", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:20:49.424 "hdgst": false, 00:20:49.424 "ddgst": false 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 },{ 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme2", 00:20:49.424 "trtype": "tcp", 00:20:49.424 "traddr": "10.0.0.2", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "4420", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:49.424 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:20:49.424 "hdgst": false, 00:20:49.424 "ddgst": false 00:20:49.424 }, 00:20:49.424 "method": "bdev_nvme_attach_controller" 00:20:49.424 },{ 00:20:49.424 "params": { 00:20:49.424 "name": "Nvme3", 00:20:49.424 "trtype": "tcp", 00:20:49.424 "traddr": "10.0.0.2", 00:20:49.424 "adrfam": "ipv4", 00:20:49.424 "trsvcid": "4420", 00:20:49.424 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 
"method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme4", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme5", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme6", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme7", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme8", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme9", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 },{ 00:20:49.425 "params": { 00:20:49.425 "name": "Nvme10", 00:20:49.425 "trtype": "tcp", 00:20:49.425 "traddr": "10.0.0.2", 00:20:49.425 "adrfam": "ipv4", 00:20:49.425 "trsvcid": "4420", 00:20:49.425 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:20:49.425 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:20:49.425 "hdgst": false, 00:20:49.425 "ddgst": false 00:20:49.425 }, 00:20:49.425 "method": "bdev_nvme_attach_controller" 00:20:49.425 }' 00:20:49.425 [2024-11-20 09:05:05.325754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:49.425 [2024-11-20 09:05:05.367288] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:51.330 Running I/O for 10 seconds... 
00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.330 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=3 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 3 -ge 100 ']' 00:20:51.330 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- )) 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 2395136 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2395136 ']' 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2395136 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:51.589 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2395136 00:20:51.861 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:20:51.861 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:20:51.861 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2395136' 00:20:51.861 killing process with pid 2395136 00:20:51.861 09:05:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 2395136 00:20:51.861 09:05:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 2395136 00:20:51.861 [2024-11-20 09:05:07.632149] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dc700 is same with the state(6) to be set [... identical recv-state errors for tqpair=0x23dc700 repeated through 09:05:07.632590, omitted ...] 00:20:51.862 [2024-11-20 09:05:07.634108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.862 [2024-11-20 09:05:07.634143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.862 [2024-11-20 09:05:07.634152] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.862 [2024-11-20 09:05:07.634159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.862 [2024-11-20 09:05:07.634167] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.862 [2024-11-20 09:05:07.634174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.862 [2024-11-20 09:05:07.634181] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.862 [2024-11-20 09:05:07.634188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0
dnr:0 00:20:51.862 [2024-11-20 09:05:07.634194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14331b0 is same with the state(6) to be set 00:20:51.862 [2024-11-20 09:05:07.634278] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23df180 is same with the state(6) to be set [... identical recv-state errors for tqpair=0x23df180 repeated through 09:05:07.634714, omitted ...] 00:20:51.864 [2024-11-20 09:05:07.636358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 
[2024-11-20 09:05:07.636530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.864 [2024-11-20 09:05:07.636793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.864 [2024-11-20 09:05:07.636801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 
[2024-11-20 09:05:07.636879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.865 [2024-11-20 09:05:07.636962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.865 [2024-11-20 09:05:07.636968] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.636973] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.636976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.636983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.636996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637010] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637018] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637028] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637036] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637042] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637074] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637082] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637098] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637106] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.865 [2024-11-20 09:05:07.637121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.865 [2024-11-20 09:05:07.637128] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.865 [2024-11-20 09:05:07.637135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637180] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637186] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637193] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637208] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637215] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637221] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637228] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637240] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637256] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637272] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637279] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637286] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637293] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637300] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637309] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637324] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637331] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637352] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.866 [2024-11-20 09:05:07.637362] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.866 [2024-11-20 09:05:07.637378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.866 [2024-11-20 09:05:07.637380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.867 [2024-11-20 09:05:07.637385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.867 [2024-11-20 09:05:07.637392] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.867 [2024-11-20 09:05:07.637399] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.867 [2024-11-20 09:05:07.637406] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637414] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.867 [2024-11-20 09:05:07.637421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.867 [2024-11-20 09:05:07.637428] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.867 [2024-11-20 09:05:07.637435] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637442] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637460] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637467] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.637473] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd0c0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:51.867 [2024-11-20 09:05:07.639332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432d50 (9): Bad file descriptor
00:20:51.867 [2024-11-20 09:05:07.639392] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.867 [2024-11-20 09:05:07.639581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639613] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639619] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639626] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639645] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639658] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639671] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639689] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639714] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639721] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639727] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639733] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639743] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639755] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.867 [2024-11-20 09:05:07.639762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639803] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00
00:20:51.868 [2024-11-20 09:05:07.639812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639858] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639870] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set
00:20:51.868 [2024-11-20 09:05:07.639895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with 
the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639922] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639928] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639934] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639940] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639951] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639958] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639964] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639970] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639976] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 
00:20:51.868 [2024-11-20 09:05:07.639982] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639989] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.639994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.640000] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd5b0 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.640529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.868 [2024-11-20 09:05:07.640552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432d50 with addr=10.0.0.2, port=4420 00:20:51.868 [2024-11-20 09:05:07.640560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d50 is same with the state(6) to be set 00:20:51.868 [2024-11-20 09:05:07.640918] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432d50 (9): Bad file descriptor 00:20:51.868 [2024-11-20 09:05:07.640993] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.868 [2024-11-20 09:05:07.641293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:51.868 [2024-11-20 09:05:07.641305] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:51.868 [2024-11-20 09:05:07.641313] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 
00:20:51.868 [2024-11-20 09:05:07.641322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 
00:20:51.868 [2024-11-20 09:05:07.642491] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd930 is same with the state(6) to be set 
00:20:51.869 [2024-11-20 09:05:07.642823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dd930 is same with the state(6) to be set 
00:20:51.869 [2024-11-20 09:05:07.643052] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:51.869 [2024-11-20 09:05:07.643642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dde00 is same with the state(6) to be set 
00:20:51.870 [2024-11-20 09:05:07.644011] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dde00 is same with the state(6) to be set 
00:20:51.870 [2024-11-20 09:05:07.644060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23dde00 is same with the state(6) to be set 
00:20:51.870 [2024-11-20 09:05:07.644244] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.870 [2024-11-20 09:05:07.644263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.870 [2024-11-20 09:05:07.644273] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.870 [2024-11-20 09:05:07.644280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.870 [2024-11-20 09:05:07.644287] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.870 [2024-11-20 09:05:07.644294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.870 [2024-11-20 09:05:07.644302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:20:51.870 [2024-11-20 09:05:07.644309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.870 [2024-11-20 09:05:07.644316] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c110 is same with the state(6) to be set 
00:20:51.871 [2024-11-20 09:05:07.644401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1870350 is same with the state(6) to be set 
00:20:51.871 [2024-11-20 09:05:07.644488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185d450 is same with the state(6) to be set 
00:20:51.871 [2024-11-20 09:05:07.644581] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185e7d0 is same with the state(6) to be set 
00:20:51.871 [2024-11-20 09:05:07.644606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14331b0 (9): Bad file descriptor 
00:20:51.871 [2024-11-20 09:05:07.644898] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 
00:20:51.871 [2024-11-20 09:05:07.645059] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 
00:20:51.871 [2024-11-20 
09:05:07.645075] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645086] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645093] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645099] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645105] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645111] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645118] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645125] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645131] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645137] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645143] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645152] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645158] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645165] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645171] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645189] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645196] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645201] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645207] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645214] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645220] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645226] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645232] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645238] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645244] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645250] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645258] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645265] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645283] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645291] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645297] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.871 [2024-11-20 09:05:07.645303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645309] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645315] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645323] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645329] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645341] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645347] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645359] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645383] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645395] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645401] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645413] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.645587] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:51.872 [2024-11-20 09:05:07.649941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:20:51.872 [2024-11-20 09:05:07.650138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.872 [2024-11-20 09:05:07.650154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432d50 with addr=10.0.0.2, port=4420 00:20:51.872 [2024-11-20 09:05:07.650162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d50 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.650200] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432d50 (9): Bad file descriptor 00:20:51.872 [2024-11-20 09:05:07.650235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:20:51.872 [2024-11-20 
09:05:07.650242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:20:51.872 [2024-11-20 09:05:07.650250] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:20:51.872 [2024-11-20 09:05:07.650258] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:20:51.872 [2024-11-20 09:05:07.654267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142c110 (9): Bad file descriptor 00:20:51.872 [2024-11-20 09:05:07.654285] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1870350 (9): Bad file descriptor 00:20:51.872 [2024-11-20 09:05:07.654304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185d450 (9): Bad file descriptor 00:20:51.872 [2024-11-20 09:05:07.654331] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654347] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654361] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 
09:05:07.654375] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654388] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1347610 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.654410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185e7d0 (9): Bad file descriptor 00:20:51.872 [2024-11-20 09:05:07.654448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654491] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.872 [2024-11-20 09:05:07.654497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654503] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18705a0 is same with the state(6) to be set 00:20:51.872 [2024-11-20 09:05:07.654608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.872 [2024-11-20 09:05:07.654617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.872 [2024-11-20 09:05:07.654635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.872 [2024-11-20 09:05:07.654651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.872 [2024-11-20 09:05:07.654669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.872 [2024-11-20 09:05:07.654677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 
09:05:07.654692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654772] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.873 [2024-11-20 09:05:07.654953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.654990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.654999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655037] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.873 [2024-11-20 09:05:07.655110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.873 [2024-11-20 09:05:07.655116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 
09:05:07.655298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655385] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.874 [2024-11-20 09:05:07.655463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.874 [2024-11-20 09:05:07.655470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 
[2024-11-20 09:05:07.655564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.655597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.655605] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1637450 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655795] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655804] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655814] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the 
state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655823] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.655831] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23de2d0 is same with the state(6) to be set 00:20:51.875 [2024-11-20 09:05:07.656617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 
lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.875 [2024-11-20 09:05:07.656864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.875 [2024-11-20 09:05:07.656873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.876 [2024-11-20 09:05:07.656890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.656985] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.656994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 
09:05:07.657251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.876 [2024-11-20 09:05:07.657312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.876 [2024-11-20 09:05:07.657321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657335] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 
[2024-11-20 09:05:07.657513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.657552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.657559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.663613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.663633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663642] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.663648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.663664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.877 [2024-11-20 09:05:07.663678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.663686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1838900 is same with the state(6) to be set 00:20:51.877 [2024-11-20 09:05:07.663751] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:20:51.877 [2024-11-20 09:05:07.664767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:20:51.877 [2024-11-20 09:05:07.664786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18705a0 (9): Bad file descriptor 00:20:51.877 [2024-11-20 09:05:07.665043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.877 [2024-11-20 09:05:07.665057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14331b0 with addr=10.0.0.2, port=4420 00:20:51.877 [2024-11-20 09:05:07.665066] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x14331b0 is same with the state(6) to be set 00:20:51.877 [2024-11-20 09:05:07.665092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1347610 (9): Bad file descriptor 00:20:51.877 [2024-11-20 09:05:07.665125] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.877 [2024-11-20 09:05:07.665134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.665142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.877 [2024-11-20 09:05:07.665149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.665157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.877 [2024-11-20 09:05:07.665164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.665172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.877 [2024-11-20 09:05:07.665179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.877 [2024-11-20 09:05:07.665185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1875b70 is same with the state(6) to be set 00:20:51.878 [2024-11-20 09:05:07.665215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:51.878 [2024-11-20 
09:05:07.665230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:51.878 [2024-11-20 09:05:07.665239] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
... (ASYNC EVENT REQUEST commands qid:0 cid:1-3 each completed ABORTED - SQ DELETION (00/08); repetitive completions elided)
00:20:51.878 [2024-11-20 09:05:07.665282] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1876c50 is same with the state(6) to be set
00:20:51.878 [2024-11-20 09:05:07.665561] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller
00:20:51.878 [2024-11-20 09:05:07.665599] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x14331b0 (9): Bad file descriptor
... (READ commands sqid:1 cid:0-63, lba 16384-24448, len:128 each completed ABORTED - SQ DELETION (00/08); repetitive completions elided)
00:20:51.880 [2024-11-20 09:05:07.667058] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1957e10 is same with the state(6) to be set
... (second abort pass: READ commands sqid:1 cid:4 onward, lba 16896 onward, len:128 each completed ABORTED - SQ DELETION (00/08); repetitive completions elided)
00:20:51.882 [2024-11-20 09:05:07.669488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 
09:05:07.669617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669734] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.669797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.669808] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1958bd0 is same with the state(6) to be set 00:20:51.882 [2024-11-20 09:05:07.671161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.671181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.671196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.882 [2024-11-20 09:05:07.671207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.882 [2024-11-20 09:05:07.671219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:51.883 [2024-11-20 09:05:07.671341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671458] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.883 [2024-11-20 09:05:07.671812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.883 [2024-11-20 09:05:07.671824] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671942] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.671980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.671990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 
09:05:07.672199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.884 [2024-11-20 09:05:07.672427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.884 [2024-11-20 09:05:07.672439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 
nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.672462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.672483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.672503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.672523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.672546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.672556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:51.885 [2024-11-20 09:05:07.672566] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1959660 is same with the state(6) to be set 00:20:51.885 [2024-11-20 09:05:07.674231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674363] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:20:51.885 [2024-11-20 09:05:07.674607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.885 [2024-11-20 09:05:07.674702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.885 [2024-11-20 09:05:07.674711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674725] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.674981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.674993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675098] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.886 [2024-11-20 09:05:07.675224] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.886 [2024-11-20 09:05:07.675236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 
09:05:07.675472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675588] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.887 [2024-11-20 09:05:07.675629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.887 [2024-11-20 09:05:07.675639] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1955a60 is same with the state(6) to be set 00:20:51.887 [2024-11-20 09:05:07.676766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:20:51.887 [2024-11-20 09:05:07.676784] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:20:51.887 [2024-11-20 09:05:07.676793] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:20:51.887 [2024-11-20 09:05:07.676802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:20:51.887 [2024-11-20 09:05:07.677071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.887 [2024-11-20 09:05:07.677088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x18705a0 with addr=10.0.0.2, port=4420 00:20:51.887 [2024-11-20 09:05:07.677097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18705a0 is same 
with the state(6) to be set 00:20:51.887 [2024-11-20 09:05:07.677298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.887 [2024-11-20 09:05:07.677310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1432d50 with addr=10.0.0.2, port=4420 00:20:51.887 [2024-11-20 09:05:07.677317] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1432d50 is same with the state(6) to be set 00:20:51.887 [2024-11-20 09:05:07.677326] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:20:51.887 [2024-11-20 09:05:07.677333] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:20:51.887 [2024-11-20 09:05:07.677342] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:20:51.887 [2024-11-20 09:05:07.677350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:20:51.887 [2024-11-20 09:05:07.677388] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1875b70 (9): Bad file descriptor 00:20:51.887 [2024-11-20 09:05:07.677409] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1876c50 (9): Bad file descriptor 00:20:51.887 [2024-11-20 09:05:07.677426] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1432d50 (9): Bad file descriptor 00:20:51.887 [2024-11-20 09:05:07.677439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18705a0 (9): Bad file descriptor 00:20:51.887 [2024-11-20 09:05:07.677709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.887 [2024-11-20 09:05:07.677726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x142c110 with addr=10.0.0.2, port=4420 00:20:51.887 [2024-11-20 09:05:07.677733] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x142c110 is same with the state(6) to be set 00:20:51.888 [2024-11-20 09:05:07.677881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.888 [2024-11-20 09:05:07.677892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185d450 with addr=10.0.0.2, port=4420 00:20:51.888 [2024-11-20 09:05:07.677899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185d450 is same with the state(6) to be set 00:20:51.888 [2024-11-20 09:05:07.677997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.888 [2024-11-20 09:05:07.678008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x185e7d0 with addr=10.0.0.2, port=4420 00:20:51.888 [2024-11-20 09:05:07.678016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x185e7d0 is same with the state(6) to be set 
00:20:51.888 [2024-11-20 09:05:07.678161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:51.888 [2024-11-20 09:05:07.678172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1870350 with addr=10.0.0.2, port=4420 00:20:51.888 [2024-11-20 09:05:07.678179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1870350 is same with the state(6) to be set 00:20:51.888 [2024-11-20 09:05:07.678890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.678920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.678936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.678959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.678974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.678991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.678998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.679007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.679015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.679023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.679029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.679038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.679046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 09:05:07.679055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:51.888 [2024-11-20 09:05:07.679062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:51.888 [2024-11-20 
09:05:07.679070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:51.888 [2024-11-20 09:05:07.679078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 53 further identical command/completion pairs elided, every command completing as ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0: READ cid:15-52 (lba:18304-23040, len:128, lba step 128), WRITE cid:0-3 (lba:24576-24960, len:128, lba step 128), READ cid:53-63 (lba:23168-24448, len:128, lba step 128) ...]
00:20:51.890 [2024-11-20 09:05:07.679926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x20a8ff0 is same with the state(6) to be set
00:20:51.890 [2024-11-20 09:05:07.681139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller
00:20:51.890 task offset: 16640 on job bdev=Nvme2n1 fails
00:20:51.890
00:20:51.890 Latency(us)
00:20:51.890 [2024-11-20T08:05:07.931Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:51.890 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme1n1 ended in about 0.64 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme1n1 : 0.64 210.00 13.13 100.30 0.00 203138.00 9858.89 218833.25
00:20:51.890 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme2n1 ended in about 0.62 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme2n1 : 0.62 206.28 12.89 103.14 0.00 198293.67 2550.21 224304.08
00:20:51.890 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme3n1 ended in about 0.65 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme3n1 : 0.65 197.04 12.32 98.52 0.00 202669.49 25644.52 207891.59
00:20:51.890 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme4n1 ended in about 0.65 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme4n1 : 0.65 202.35 12.65 98.11 0.00 194290.98 25986.45 206979.78
00:20:51.890 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme5n1 ended in about 0.66 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme5n1 : 0.66 195.39 12.21 97.70 0.00 193907.01 19489.84 222480.47
00:20:51.890 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme6n1 ended in about 0.66 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme6n1 : 0.66 199.29 12.46 96.63 0.00 187106.15 16754.42 216097.84
00:20:51.890 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme7n1 ended in about 0.65 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme7n1 : 0.65 198.05 12.38 99.03 0.00 180361.72 15728.64 211538.81
00:20:51.890 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme8n1 : 0.62 205.03 12.81 0.00 0.00 251725.47 13164.19 225215.89
00:20:51.890 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme9n1 : 0.63 203.55 12.72 0.00 0.00 246246.18 34648.60 227951.30
00:20:51.890 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:51.890 Job: Nvme10n1 ended in about 0.66 seconds with error
00:20:51.890 Verification LBA range: start 0x0 length 0x400
00:20:51.890 Nvme10n1 : 0.66 97.24 6.08 97.24 0.00 253125.68 18236.10 246187.41
00:20:51.890 [2024-11-20T08:05:07.931Z] ===================================================================================================================
00:20:51.890 [2024-11-20T08:05:07.931Z] Total : 1914.22 119.64 790.65 0.00 206636.13 2550.21 246187.41
00:20:51.890 [2024-11-20 09:05:07.712443] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:51.890 [2024-11-20 09:05:07.712494] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller
00:20:51.890 [2024-11-20 09:05:07.712555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x142c110 (9): Bad file descriptor
00:20:51.890 [2024-11-20 09:05:07.712569] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185d450 (9): Bad file descriptor
00:20:51.890 [2024-11-20 09:05:07.712579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x185e7d0 (9): Bad file descriptor
00:20:51.891 [2024-11-20 09:05:07.712589] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1870350 (9): Bad file descriptor
00:20:51.891 [2024-11-20 09:05:07.712598] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state
00:20:51.891 [2024-11-20 09:05:07.712605] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed
00:20:51.891 [2024-11-20 09:05:07.712613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*:
[nqn.2016-06.io.spdk:cnode7, 1] in failed state.
00:20:51.891 [2024-11-20 09:05:07.712623] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed.
[... the same four-record failure sequence (nvme_ctrlr.c:4206:nvme_ctrlr_process_init "Ctrlr is in error state", nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async "controller reinitialization failed", nvme_ctrlr.c:1110:nvme_ctrlr_fail "in failed state", bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete "Resetting controller failed") elided for cnode2, cnode3, cnode4, cnode5, cnode10, cnode1, cnode6 and, after the reconnect attempts below, again for cnode9, cnode8, cnode2, cnode7, cnode10, cnode5, cnode4 ...]
[... reconnect attempts elided: posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error", and nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state "The recv state ... is same with the state(6) to be set" for tqpairs 0x14331b0, 0x1347610, 0x1875b70, 0x1876c50, 0x1432d50, 0x18705a0, 0x1870350, 0x185e7d0, 0x185d450, 0x142c110, all with addr=10.0.0.2, port=4420 ...]
[... nvme_tcp.c:2085:nvme_tcp_qpair_process_completions "Failed to flush tqpair=... (9): Bad file descriptor" elided for the same tqpairs ...]
00:20:51.891 [2024-11-20 09:05:07.713797] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller
[... matching nvme_ctrlr_disconnect "resetting controller" notices elided for cnode8, cnode2, cnode7, cnode10, cnode5, cnode4, cnode3 ...]
00:20:51.892 [2024-11-20 09:05:07.715611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:20:51.892 [2024-11-20 09:05:07.715616] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:20:51.892 [2024-11-20 09:05:07.715623] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:20:51.892 [2024-11-20 09:05:07.715631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:20:52.152 09:05:08 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 2395421 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2395421 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 2395421 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:20:53.091 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for 
i in {1..20} 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:53.091 rmmod nvme_tcp 00:20:53.091 rmmod nvme_fabrics 00:20:53.091 rmmod nvme_keyring 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 2395136 ']' 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 2395136 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 2395136 ']' 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 2395136 00:20:53.091 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2395136) - No such process 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2395136 is not found' 00:20:53.091 Process with pid 2395136 is not found 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@264 -- # local dev 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:53.091 09:05:09 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:53.091 09:05:09 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # return 0 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 
00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@284 -- # iptr 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-save 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-restore 00:20:55.645 00:20:55.645 real 0m7.900s 00:20:55.645 user 0m19.185s 00:20:55.645 sys 0m1.351s 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 ************************************ 00:20:55.645 END TEST nvmf_shutdown_tc3 00:20:55.645 ************************************ 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 ************************************ 00:20:55.645 START TEST nvmf_shutdown_tc4 00:20:55.645 ************************************ 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@258 -- # local -g is_hw=no 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:20:55.645 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:55.646 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:55.646 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:55.646 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == 
rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:55.646 Found net devices under 0000:86:00.0: cvl_0_0 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ 
up == up ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:55.646 Found net devices under 0000:86:00.1: cvl_0_1 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # create_target_ns 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:55.646 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ 
tcp == tcp ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:55.647 10.0.0.1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:20:55.647 10.0.0.2 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:55.647 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:55.647 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.647 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:55.647 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:55.647 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.432 ms 00:20:55.647 00:20:55.647 --- 10.0.0.1 ping statistics --- 00:20:55.647 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.647 rtt min/avg/max/mdev = 0.432/0.432/0.432/0.000 ms 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n 
target0 ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:55.647 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:55.648 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:55.648 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:20:55.648 00:20:55.648 --- 10.0.0.2 ping statistics --- 00:20:55.648 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:55.648 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.648 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:55.648 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.648 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:55.648 
09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:55.648 09:05:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:55.648 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:55.908 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:20:55.908 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:55.908 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:55.908 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=2396693 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 2396693 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 2396693 ']' 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:55.909 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:55.909 09:05:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:55.909 [2024-11-20 09:05:11.751761] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:20:55.909 [2024-11-20 09:05:11.751807] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:55.909 [2024-11-20 09:05:11.831522] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:55.909 [2024-11-20 09:05:11.873662] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:55.909 [2024-11-20 09:05:11.873698] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:55.909 [2024-11-20 09:05:11.873705] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:55.909 [2024-11-20 09:05:11.873714] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:55.909 [2024-11-20 09:05:11.873719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:55.909 [2024-11-20 09:05:11.875352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:55.909 [2024-11-20 09:05:11.875465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:55.909 [2024-11-20 09:05:11.875570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:55.909 [2024-11-20 09:05:11.875572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.846 [2024-11-20 09:05:12.641563] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:56.846 09:05:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:20:56.846 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:56.847 09:05:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:56.847 Malloc1 00:20:56.847 [2024-11-20 09:05:12.755482] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:56.847 Malloc2 00:20:56.847 Malloc3 00:20:56.847 Malloc4 00:20:57.105 Malloc5 00:20:57.105 Malloc6 00:20:57.105 Malloc7 00:20:57.105 Malloc8 00:20:57.105 Malloc9 
00:20:57.105 Malloc10 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=2396973 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:20:57.364 09:05:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:20:57.364 [2024-11-20 09:05:13.265730] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 2396693
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2396693 ']'
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2396693
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2396693
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2396693'
killing process with pid 2396693
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 2396693
00:21:02.646 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 2396693
00:21:02.646 Write completed with error (sct=0, sc=8)
00:21:02.646 Write completed with error (sct=0, sc=8)
00:21:02.646 Write completed with error (sct=0, sc=8)
00:21:02.646 starting I/O failed: -6
00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 [2024-11-20 09:05:18.263605] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 [2024-11-20 09:05:18.263656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 [2024-11-20 09:05:18.263664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 [2024-11-20 09:05:18.263671] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 [2024-11-20 09:05:18.263678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 [2024-11-20 09:05:18.263685] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72c00 is same with the state(6) to be set 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 [2024-11-20 09:05:18.263938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 
00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 [2024-11-20 09:05:18.264436] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the state(6) to be set 00:21:02.646 Write completed with error (sct=0, sc=8) 00:21:02.646 starting I/O failed: -6 00:21:02.646 [2024-11-20 09:05:18.264463] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the state(6) to be set 00:21:02.646 [2024-11-20 09:05:18.264471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.264478] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the state(6) to be set 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.264485] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the 
state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.264492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d590 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.264809] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.264962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.264988] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.264997] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265004] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with starting I/O failed: -6 00:21:02.647 the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265012] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265025] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.265032] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265039] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d5d910 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 
starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.265338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265361] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265370] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265376] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with starting I/O failed: -6 00:21:02.647 the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265384] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 [2024-11-20 
09:05:18.265390] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265396] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.265403] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 [2024-11-20 09:05:18.265410] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 [2024-11-20 09:05:18.265415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72730 is same with the state(6) to be set 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error 
(sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 [2024-11-20 09:05:18.265863] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with 
error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.647 Write completed with error (sct=0, sc=8) 00:21:02.647 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed 
with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write 
completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 [2024-11-20 09:05:18.267457] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.648 NVMe io qpair process completion error 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 
00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 [2024-11-20 09:05:18.273530] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.648 starting I/O failed: -6 00:21:02.648 starting I/O failed: -6 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error 
(sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 [2024-11-20 09:05:18.274308] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with the state(6) to be set 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 [2024-11-20 09:05:18.274338] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with the state(6) to be set 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 [2024-11-20 09:05:18.274346] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with the state(6) to be set 00:21:02.648 [2024-11-20 09:05:18.274353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with Write 
completed with error (sct=0, sc=8) 00:21:02.648 the state(6) to be set 00:21:02.648 [2024-11-20 09:05:18.274365] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with the state(6) to be set 00:21:02.648 [2024-11-20 09:05:18.274371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8a20 is same with the state(6) to be set 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 [2024-11-20 09:05:18.274491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 starting I/O failed: -6 00:21:02.648 Write completed with error (sct=0, sc=8) 00:21:02.648 [2024-11-20 09:05:18.274687] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.274708] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 
09:05:18.274716] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.274723] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.274729] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.274736] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.274742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with Write completed with error (sct=0, sc=8) 00:21:02.649 the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.274749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.274756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.274762] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8ef0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 
00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275140] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275168] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275175] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 starting I/O failed: 
-6 00:21:02.649 [2024-11-20 09:05:18.275182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275203] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd93c0 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20
09:05:18.275471] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275501] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd8550 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 
00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275846] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 [2024-11-20 09:05:18.275853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.275859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.275866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd9d80 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed
with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 starting I/O failed: -6 00:21:02.649 [2024-11-20 09:05:18.276136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same with the state(6) to be set 00:21:02.649 Write completed with error (sct=0, sc=8) 00:21:02.649 [2024-11-20 09:05:18.276147] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same with the state(6) to be set 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276154] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276160] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 [2024-11-20 09:05:18.276167] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same 
with the state(6) to be set 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276173] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda250 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 [2024-11-20 09:05:18.276495] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 starting 
I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276508] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 [2024-11-20 09:05:18.276514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276521] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dda740 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 
starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.276905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd98b0 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276916] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd98b0 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276923] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd98b0 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276929] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd98b0 is same with the state(6) to be set 00:21:02.650 [2024-11-20 09:05:18.276935] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd98b0 is same with the state(6) to be set 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 [2024-11-20 09:05:18.277112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.650 NVMe io qpair process completion error 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O 
failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 
00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 [2024-11-20 09:05:18.278104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write 
completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.650 starting I/O failed: -6 00:21:02.650 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 [2024-11-20 09:05:18.279013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write 
completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 
00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 [2024-11-20 09:05:18.280015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.651 Write completed with 
error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed 
with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.651 Write completed with error (sct=0, sc=8) 00:21:02.651 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write 
completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 [2024-11-20 09:05:18.282006] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.652 NVMe io qpair process completion error 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 Write completed with error (sct=0, sc=8) 00:21:02.652 starting I/O failed: -6 00:21:02.652 Write completed with error (sct=0, sc=8) 
00:21:02.652 Write completed with error (sct=0, sc=8)
00:21:02.652 starting I/O failed: -6
[the two messages above repeat many times, interleaved, before and between each of the errors below]
00:21:02.652 [2024-11-20 09:05:18.282922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.652 [2024-11-20 09:05:18.283833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.653 [2024-11-20 09:05:18.284905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.653 [2024-11-20 09:05:18.286853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.653 NVMe io qpair process completion error
00:21:02.653 [2024-11-20 09:05:18.287920] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.654 [2024-11-20 09:05:18.288727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.654 [2024-11-20 09:05:18.289758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.655 [2024-11-20 09:05:18.299118] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.655 NVMe io qpair process completion error
00:21:02.655 [2024-11-20 09:05:18.300243] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.655 [2024-11-20 09:05:18.301056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.656 [2024-11-20 09:05:18.302136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.656 Write completed with error (sct=0, sc=8)
00:21:02.656 starting I/O
failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting 
I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 
starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 [2024-11-20 09:05:18.303984] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.656 NVMe io qpair process completion error 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with 
error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 Write completed with error (sct=0, sc=8) 00:21:02.656 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 [2024-11-20 09:05:18.305013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:02.657 starting I/O failed: -6 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 
00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 
00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 [2024-11-20 09:05:18.305899] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with 
error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 
Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 [2024-11-20 09:05:18.306986] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write 
completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.657 Write completed with error (sct=0, sc=8) 00:21:02.657 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 
Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 
00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 [2024-11-20 09:05:18.313022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.658 NVMe io qpair process completion error 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error 
(sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 [2024-11-20 09:05:18.314017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 
00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write 
completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 [2024-11-20 09:05:18.314930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 Write completed with error (sct=0, sc=8) 00:21:02.658 starting I/O failed: -6 00:21:02.659 Write completed with error (sct=0, sc=8) 00:21:02.659 
00:21:02.659 Write completed with error (sct=0, sc=8)
00:21:02.659 starting I/O failed: -6
00:21:02.659 [2024-11-20 09:05:18.315961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.659 [2024-11-20 09:05:18.320026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.659 NVMe io qpair process completion error
00:21:02.660 [2024-11-20 09:05:18.321063] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.660 [2024-11-20 09:05:18.321976] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.660 [2024-11-20 09:05:18.323056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.661 [2024-11-20 09:05:18.324895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.661 NVMe io qpair process completion error
00:21:02.661 [2024-11-20 09:05:18.325873] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:02.661 [2024-11-20 09:05:18.326787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:02.662 [2024-11-20 09:05:18.327845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:02.662 [2024-11-20 09:05:18.335094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:02.662 NVMe io qpair process completion error 00:21:02.662 Initializing NVMe Controllers 00:21:02.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:02.662 Controller IO queue size 128, less than required. 00:21:02.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:02.662 Controller IO queue size 128, less than required. 00:21:02.662 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.662 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:02.662 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:02.663 Controller IO queue size 128, less than required. 
00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:02.663 Controller IO queue size 128, less than required. 00:21:02.663 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:02.663 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:02.663 Initialization complete. 
Launching workers. 00:21:02.663 ======================================================== 00:21:02.663 Latency(us) 00:21:02.663 Device Information : IOPS MiB/s Average min max 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2204.07 94.71 58081.59 687.26 114541.64 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2137.47 91.84 59956.40 713.99 123908.99 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2119.98 91.09 59768.61 894.61 110968.17 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2113.02 90.79 60647.00 654.57 125934.53 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2128.83 91.47 60211.46 666.80 109405.51 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2118.92 91.05 59781.01 655.68 108422.95 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2105.86 90.49 60161.84 957.32 109140.71 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2077.40 89.26 61001.30 909.96 108829.59 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2195.00 94.32 57748.29 690.75 107262.53 00:21:02.663 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2207.23 94.84 57531.21 757.40 114877.91 00:21:02.663 ======================================================== 00:21:02.663 Total : 21407.77 919.87 59466.10 654.57 125934.53 00:21:02.663 00:21:02.663 [2024-11-20 09:05:18.341752] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7560 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.341809] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca8410 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.341847] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca9900 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.341884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca8740 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.341920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca8a70 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.341966] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca9720 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.342002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7bc0 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.342038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca9ae0 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.342073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7890 is same with the state(6) to be set 00:21:02.663 [2024-11-20 09:05:18.342108] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca7ef0 is same with the state(6) to be set 00:21:02.663 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:02.663 09:05:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 2396973 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2396973 00:21:04.043 09:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 2396973 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:04.043 09:05:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:04.043 rmmod nvme_tcp 00:21:04.043 rmmod nvme_fabrics 00:21:04.043 rmmod nvme_keyring 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 2396693 ']' 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 2396693 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 2396693 ']' 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 2396693 00:21:04.043 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2396693) - No such process 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 2396693 is not found' 
00:21:04.043 Process with pid 2396693 is not found 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@264 -- # local dev 00:21:04.043 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:04.044 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:04.044 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:04.044 09:05:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # return 0 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # 
local dev=cvl_0_0 in_ns= 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:21:05.951 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@284 -- # iptr 00:21:05.952 09:05:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-save 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-restore 00:21:05.952 00:21:05.952 real 0m10.528s 00:21:05.952 user 0m27.894s 00:21:05.952 sys 0m5.092s 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:05.952 ************************************ 00:21:05.952 END TEST nvmf_shutdown_tc4 00:21:05.952 ************************************ 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:05.952 00:21:05.952 real 0m43.351s 00:21:05.952 user 1m49.698s 00:21:05.952 sys 0m14.121s 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:05.952 ************************************ 00:21:05.952 END TEST nvmf_shutdown 00:21:05.952 ************************************ 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:05.952 ************************************ 00:21:05.952 START TEST 
nvmf_nsid 00:21:05.952 ************************************ 00:21:05.952 09:05:21 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:06.212 * Looking for test storage... 00:21:06.212 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:06.212 
09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" in 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.212 --rc genhtml_branch_coverage=1 00:21:06.212 --rc genhtml_function_coverage=1 00:21:06.212 --rc genhtml_legend=1 00:21:06.212 --rc geninfo_all_blocks=1 00:21:06.212 --rc geninfo_unexecuted_blocks=1 00:21:06.212 00:21:06.212 ' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.212 --rc genhtml_branch_coverage=1 00:21:06.212 --rc genhtml_function_coverage=1 00:21:06.212 --rc genhtml_legend=1 00:21:06.212 --rc geninfo_all_blocks=1 00:21:06.212 --rc geninfo_unexecuted_blocks=1 00:21:06.212 00:21:06.212 ' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.212 --rc genhtml_branch_coverage=1 00:21:06.212 --rc genhtml_function_coverage=1 00:21:06.212 --rc genhtml_legend=1 00:21:06.212 --rc geninfo_all_blocks=1 00:21:06.212 --rc geninfo_unexecuted_blocks=1 00:21:06.212 00:21:06.212 ' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:06.212 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:06.212 --rc genhtml_branch_coverage=1 00:21:06.212 --rc genhtml_function_coverage=1 00:21:06.212 --rc genhtml_legend=1 00:21:06.212 --rc geninfo_all_blocks=1 00:21:06.212 --rc geninfo_unexecuted_blocks=1 00:21:06.212 00:21:06.212 ' 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:06.212 
09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:06.212 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ 
-e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:21:06.213 09:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:06.213 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:06.213 09:05:22 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:21:06.213 09:05:22 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:12.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.789 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:12.790 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:12.790 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:12.790 Found net devices under 0000:86:00.0: cvl_0_0 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.790 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:12.790 Found net devices under 0000:86:00.1: cvl_0_1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # create_target_ns 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:12.790 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:12.790 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:12.790 10.0.0.1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:12.790 09:05:27 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:12.790 10.0.0.2 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:12.790 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:12.791 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:12.791 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:12.791 09:05:27 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:12.791 09:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.791 09:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:12.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:12.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.438 ms 00:21:12.791 00:21:12.791 --- 10.0.0.1 ping statistics --- 00:21:12.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.791 rtt min/avg/max/mdev = 0.438/0.438/0.438/0.000 ms 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:12.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:12.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:21:12.791 00:21:12.791 --- 10.0.0.2 ping statistics --- 00:21:12.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:12.791 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 
00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@109 -- # return 1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:12.791 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:12.792 09:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # return 1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=2401459 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 2401459 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2401459 ']' 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:12.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.792 [2024-11-20 09:05:28.207865] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:12.792 [2024-11-20 09:05:28.207916] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:12.792 [2024-11-20 09:05:28.285609] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.792 [2024-11-20 09:05:28.328221] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:12.792 [2024-11-20 09:05:28.328256] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:12.792 [2024-11-20 09:05:28.328262] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:12.792 [2024-11-20 09:05:28.328268] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:12.792 [2024-11-20 09:05:28.328273] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:12.792 [2024-11-20 09:05:28.328856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=2401605 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=817d9b98-ecb7-45cc-bbd2-2ab1182f3f9d 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=eaaf8b7a-e40e-4d3a-a3dc-8c323e2147ef 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:21:12.792 09:05:28 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=c0ce4ac9-7d63-4580-b4af-ad2b2bab0278 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.792 null0 00:21:12.792 null1 00:21:12.792 null2 00:21:12.792 [2024-11-20 09:05:28.517132] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:12.792 [2024-11-20 09:05:28.518311] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:12.792 [2024-11-20 09:05:28.518355] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2401605 ] 00:21:12.792 [2024-11-20 09:05:28.541335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 2401605 /var/tmp/tgt2.sock 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 2401605 ']' 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:21:12.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:12.792 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:12.792 [2024-11-20 09:05:28.592973] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:12.792 [2024-11-20 09:05:28.634916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:13.053 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:13.053 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:21:13.053 09:05:28 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:21:13.312 [2024-11-20 09:05:29.164400] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:13.312 [2024-11-20 09:05:29.180513] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:21:13.312 nvme0n1 nvme0n2 00:21:13.312 nvme1n1 00:21:13.312 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:21:13.312 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:21:13.312 09:05:29 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:21:14.248 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:21:14.508 09:05:30 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 817d9b98-ecb7-45cc-bbd2-2ab1182f3f9d 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:21:15.445 09:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=817d9b98ecb745ccbbd22ab1182f3f9d 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 817D9B98ECB745CCBBD22AB1182F3F9D 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 817D9B98ECB745CCBBD22AB1182F3F9D == \8\1\7\D\9\B\9\8\E\C\B\7\4\5\C\C\B\B\D\2\2\A\B\1\1\8\2\F\3\F\9\D ]] 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid eaaf8b7a-e40e-4d3a-a3dc-8c323e2147ef 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:21:15.445 09:05:31 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=eaaf8b7ae40e4d3aa3dc8c323e2147ef 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo EAAF8B7AE40E4D3AA3DC8C323E2147EF 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ EAAF8B7AE40E4D3AA3DC8C323E2147EF == \E\A\A\F\8\B\7\A\E\4\0\E\4\D\3\A\A\3\D\C\8\C\3\2\3\E\2\1\4\7\E\F ]] 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid c0ce4ac9-7d63-4580-b4af-ad2b2bab0278 00:21:15.445 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:21:15.704 
09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=c0ce4ac97d634580b4afad2b2bab0278 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo C0CE4AC97D634580B4AFAD2B2BAB0278 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ C0CE4AC97D634580B4AFAD2B2BAB0278 == \C\0\C\E\4\A\C\9\7\D\6\3\4\5\8\0\B\4\A\F\A\D\2\B\2\B\A\B\0\2\7\8 ]] 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 2401605 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2401605 ']' 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2401605 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:15.704 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:15.705 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401605 00:21:15.964 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:15.964 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:15.964 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2401605' 00:21:15.964 killing process with pid 2401605 00:21:15.964 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2401605 00:21:15.964 09:05:31 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2401605 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:16.224 rmmod nvme_tcp 00:21:16.224 rmmod nvme_fabrics 00:21:16.224 rmmod nvme_keyring 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 2401459 ']' 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 2401459 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 2401459 ']' 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 2401459 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2401459 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2401459' 00:21:16.224 killing process with pid 2401459 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 2401459 00:21:16.224 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 2401459 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@264 -- # local dev 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:16.483 09:05:32 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # return 0 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:18.386 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:21:18.645 
09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@284 -- # iptr 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-save 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-restore 00:21:18.645 00:21:18.645 real 0m12.514s 00:21:18.645 user 0m9.818s 00:21:18.645 sys 0m5.530s 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:21:18.645 ************************************ 00:21:18.645 END TEST nvmf_nsid 00:21:18.645 ************************************ 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:21:18.645 00:21:18.645 real 12m7.474s 00:21:18.645 user 26m5.745s 00:21:18.645 sys 3m42.071s 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:18.645 09:05:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:18.645 ************************************ 00:21:18.645 END TEST nvmf_target_extra 00:21:18.645 ************************************ 00:21:18.645 09:05:34 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:18.645 09:05:34 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.645 09:05:34 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.645 09:05:34 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:18.645 ************************************ 00:21:18.645 START TEST nvmf_host 00:21:18.645 ************************************ 00:21:18.645 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:18.645 * Looking for test storage... 00:21:18.645 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:18.645 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.645 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.645 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.903 --rc genhtml_branch_coverage=1 00:21:18.903 --rc genhtml_function_coverage=1 00:21:18.903 --rc genhtml_legend=1 00:21:18.903 --rc geninfo_all_blocks=1 00:21:18.903 --rc geninfo_unexecuted_blocks=1 00:21:18.903 00:21:18.903 ' 00:21:18.903 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:18.903 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.903 --rc genhtml_branch_coverage=1 00:21:18.903 --rc genhtml_function_coverage=1 00:21:18.903 --rc genhtml_legend=1 00:21:18.904 --rc 
geninfo_all_blocks=1 00:21:18.904 --rc geninfo_unexecuted_blocks=1 00:21:18.904 00:21:18.904 ' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.904 --rc genhtml_branch_coverage=1 00:21:18.904 --rc genhtml_function_coverage=1 00:21:18.904 --rc genhtml_legend=1 00:21:18.904 --rc geninfo_all_blocks=1 00:21:18.904 --rc geninfo_unexecuted_blocks=1 00:21:18.904 00:21:18.904 ' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:18.904 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:18.904 --rc genhtml_branch_coverage=1 00:21:18.904 --rc genhtml_function_coverage=1 00:21:18.904 --rc genhtml_legend=1 00:21:18.904 --rc geninfo_all_blocks=1 00:21:18.904 --rc geninfo_unexecuted_blocks=1 00:21:18.904 00:21:18.904 ' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:18.904 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:18.904 ************************************ 00:21:18.904 START TEST nvmf_aer 00:21:18.904 ************************************ 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:21:18.904 * Looking for test storage... 00:21:18.904 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:21:18.904 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:19.163 09:05:34 
nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.163 --rc genhtml_branch_coverage=1 00:21:19.163 --rc genhtml_function_coverage=1 00:21:19.163 --rc genhtml_legend=1 00:21:19.163 --rc geninfo_all_blocks=1 00:21:19.163 --rc geninfo_unexecuted_blocks=1 00:21:19.163 00:21:19.163 ' 00:21:19.163 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:21:19.163 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.163 --rc genhtml_branch_coverage=1 00:21:19.163 --rc genhtml_function_coverage=1 00:21:19.163 --rc genhtml_legend=1 00:21:19.163 --rc geninfo_all_blocks=1 00:21:19.164 --rc geninfo_unexecuted_blocks=1 00:21:19.164 00:21:19.164 ' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.164 --rc genhtml_branch_coverage=1 00:21:19.164 --rc genhtml_function_coverage=1 00:21:19.164 --rc genhtml_legend=1 00:21:19.164 --rc geninfo_all_blocks=1 00:21:19.164 --rc geninfo_unexecuted_blocks=1 00:21:19.164 00:21:19.164 ' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:19.164 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:19.164 --rc genhtml_branch_coverage=1 00:21:19.164 --rc genhtml_function_coverage=1 00:21:19.164 --rc genhtml_legend=1 00:21:19.164 --rc geninfo_all_blocks=1 00:21:19.164 --rc geninfo_unexecuted_blocks=1 00:21:19.164 00:21:19.164 ' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:19.164 09:05:34 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:19.164 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:19.164 09:05:34 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:21:19.164 09:05:35 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:25.744 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:21:25.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:25.744 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:25.745 Found net devices under 0000:86:00.0: cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:25.745 09:05:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:25.745 Found net devices under 0000:86:00.1: cvl_0_1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # create_target_ns 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:25.745 09:05:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:25.745 
09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:25.745 09:05:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:25.745 10.0.0.1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:25.745 10.0.0.2 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 
4420 -j ACCEPT' 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:25.745 
09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:25.745 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:25.746 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:25.746 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:21:25.746 00:21:25.746 --- 10.0.0.1 ping statistics --- 00:21:25.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.746 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:25.746 09:05:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:25.746 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:25.746 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:21:25.746 00:21:25.746 --- 10.0.0.2 ping statistics --- 00:21:25.746 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:25.746 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:25.746 
09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:25.746 09:05:40 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:25.746 
09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:25.746 09:05:40 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t 
tcp' 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=2405818 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 2405818 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:25.746 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 2405818 ']' 00:21:25.747 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:25.747 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:25.747 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:25.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
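Earlier in this trace, `set_ip` turns the pool integer 167772162 into the dotted-quad 10.0.0.2 before running `ip addr add` (nvmf/setup.sh@11-13 and @207). A minimal standalone sketch of that conversion; the shift/mask arithmetic is an assumption, since the trace only shows the final `printf '%u.%u.%u.%u'` with its four fields already split:

```shell
# Sketch of the val_to_ip step seen at nvmf/setup.sh@11-13: split a
# 32-bit value into four octets and print them dotted. The shift/mask
# derivation of the octets is assumed; the trace only logs the printf.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772162   # → 10.0.0.2, the target-side address in the trace
```

The pool then advances by two (`ip_pool += 2` at nvmf/setup.sh@33), one address for the initiator side and one for the target side of each device pair.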
00:21:25.747 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:25.747 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:25.747 [2024-11-20 09:05:41.104360] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:25.747 [2024-11-20 09:05:41.104409] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:25.747 [2024-11-20 09:05:41.185690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:25.747 [2024-11-20 09:05:41.229185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:25.747 [2024-11-20 09:05:41.229227] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:25.747 [2024-11-20 09:05:41.229234] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:25.747 [2024-11-20 09:05:41.229240] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:25.747 [2024-11-20 09:05:41.229245] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
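Both `waitforlisten` here and `waitforfile` during the aer run below rely on the same bounded-polling pattern: re-check a condition every 0.1 s and give up after a fixed number of attempts. A re-creation of the `waitforfile` variant, reconstructed from the `'[' 0 -lt 200 ']'` / `sleep 0.1` steps visible in the trace; anything beyond those logged steps is an assumption:

```shell
# Bounded polling in the style of the trace's waitforfile helper:
# check for a path every 0.1 s, give up after 200 attempts (~20 s).
# Reconstructed from the logged loop steps; details are assumptions.
waitforfile() {
  local file=$1 i=0
  while [ ! -e "$file" ]; do
    if [ "$i" -lt 200 ]; then
      i=$(( i + 1 ))
      sleep 0.1
    else
      return 1   # timed out waiting for the file to appear
    fi
  done
  return 0
}
```

In the aer run below, host/aer.sh@36 polls for /tmp/aer_touch_file, which the aer binary creates once its event callbacks are registered; the trace shows the loop taking three passes before the file exists.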
00:21:25.747 [2024-11-20 09:05:41.230750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.747 [2024-11-20 09:05:41.230859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:25.747 [2024-11-20 09:05:41.231008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:25.747 [2024-11-20 09:05:41.231009] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 [2024-11-20 09:05:41.989345] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 09:05:41 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 Malloc0 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.006 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.006 [2024-11-20 09:05:42.044550] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.265 [ 00:21:26.265 { 00:21:26.265 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:26.265 "subtype": "Discovery", 00:21:26.265 "listen_addresses": 
[], 00:21:26.265 "allow_any_host": true, 00:21:26.265 "hosts": [] 00:21:26.265 }, 00:21:26.265 { 00:21:26.265 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.265 "subtype": "NVMe", 00:21:26.265 "listen_addresses": [ 00:21:26.265 { 00:21:26.265 "trtype": "TCP", 00:21:26.265 "adrfam": "IPv4", 00:21:26.265 "traddr": "10.0.0.2", 00:21:26.265 "trsvcid": "4420" 00:21:26.265 } 00:21:26.265 ], 00:21:26.265 "allow_any_host": true, 00:21:26.265 "hosts": [], 00:21:26.265 "serial_number": "SPDK00000000000001", 00:21:26.265 "model_number": "SPDK bdev Controller", 00:21:26.265 "max_namespaces": 2, 00:21:26.265 "min_cntlid": 1, 00:21:26.265 "max_cntlid": 65519, 00:21:26.265 "namespaces": [ 00:21:26.265 { 00:21:26.265 "nsid": 1, 00:21:26.265 "bdev_name": "Malloc0", 00:21:26.265 "name": "Malloc0", 00:21:26.265 "nguid": "A1876D0EB0494B22855F59ED2F167850", 00:21:26.265 "uuid": "a1876d0e-b049-4b22-855f-59ed2f167850" 00:21:26.265 } 00:21:26.265 ] 00:21:26.265 } 00:21:26.265 ] 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2406063 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:21:26.265 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 Malloc1 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 Asynchronous Event Request test 00:21:26.525 Attaching to 10.0.0.2 00:21:26.525 Attached to 10.0.0.2 00:21:26.525 Registering asynchronous event callbacks... 00:21:26.525 Starting namespace attribute notice tests for all controllers... 00:21:26.525 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:21:26.525 aer_cb - Changed Namespace 00:21:26.525 Cleaning up... 
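In the `nvmf_get_subsystems` dumps, each namespace reports both an `"nguid"` and a `"uuid"` that encode the same 16 bytes: the NGUID is the UUID with the dashes stripped and the hex uppercased. A small helper illustrating the relationship; the helper name is hypothetical, not part of SPDK:

```shell
# Hypothetical helper: render a namespace UUID in the undashed,
# uppercase form that nvmf_get_subsystems reports as "nguid".
uuid_to_nguid() {
  local uuid=$1
  local hex=${uuid//-/}                  # drop the dashes
  echo "$hex" | tr '[:lower:]' '[:upper:]'
}

uuid_to_nguid a1876d0e-b049-4b22-855f-59ed2f167850
# → A1876D0EB0494B22855F59ED2F167850 (Malloc0's nguid in the dump)
```

The second namespace in the dump follows the same pattern: Malloc1's uuid 8c0f9151-cd83-40cf-93a4-63aec7707bf8 corresponds to nguid 8C0F9151CD8340CF93A463AEC7707BF8.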
00:21:26.525 [ 00:21:26.525 { 00:21:26.525 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:26.525 "subtype": "Discovery", 00:21:26.525 "listen_addresses": [], 00:21:26.525 "allow_any_host": true, 00:21:26.525 "hosts": [] 00:21:26.525 }, 00:21:26.525 { 00:21:26.525 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:26.525 "subtype": "NVMe", 00:21:26.525 "listen_addresses": [ 00:21:26.525 { 00:21:26.525 "trtype": "TCP", 00:21:26.525 "adrfam": "IPv4", 00:21:26.525 "traddr": "10.0.0.2", 00:21:26.525 "trsvcid": "4420" 00:21:26.525 } 00:21:26.525 ], 00:21:26.525 "allow_any_host": true, 00:21:26.525 "hosts": [], 00:21:26.525 "serial_number": "SPDK00000000000001", 00:21:26.525 "model_number": "SPDK bdev Controller", 00:21:26.525 "max_namespaces": 2, 00:21:26.525 "min_cntlid": 1, 00:21:26.525 "max_cntlid": 65519, 00:21:26.525 "namespaces": [ 00:21:26.525 { 00:21:26.525 "nsid": 1, 00:21:26.525 "bdev_name": "Malloc0", 00:21:26.525 "name": "Malloc0", 00:21:26.525 "nguid": "A1876D0EB0494B22855F59ED2F167850", 00:21:26.525 "uuid": "a1876d0e-b049-4b22-855f-59ed2f167850" 00:21:26.525 }, 00:21:26.525 { 00:21:26.525 "nsid": 2, 00:21:26.525 "bdev_name": "Malloc1", 00:21:26.525 "name": "Malloc1", 00:21:26.525 "nguid": "8C0F9151CD8340CF93A463AEC7707BF8", 00:21:26.525 "uuid": "8c0f9151-cd83-40cf-93a4-63aec7707bf8" 00:21:26.525 } 00:21:26.525 ] 00:21:26.525 } 00:21:26.525 ] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2406063 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:26.525 rmmod nvme_tcp 00:21:26.525 rmmod nvme_fabrics 00:21:26.525 rmmod nvme_keyring 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 
2405818 ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 2405818 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 2405818 ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 2405818 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:26.525 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2405818 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2405818' 00:21:26.786 killing process with pid 2405818 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 2405818 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 2405818 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@264 -- # local dev 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:26.786 09:05:42 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:29.325 09:05:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # return 0 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:29.325 09:05:44 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@284 -- # iptr 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-save 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-restore 00:21:29.325 00:21:29.325 real 0m10.051s 00:21:29.325 user 0m8.126s 00:21:29.325 sys 0m4.930s 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:21:29.325 ************************************ 00:21:29.325 END TEST nvmf_aer 00:21:29.325 ************************************ 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:29.325 ************************************ 00:21:29.325 START TEST nvmf_async_init 00:21:29.325 ************************************ 00:21:29.325 09:05:44 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:21:29.325 * Looking for test storage... 
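The `iptr` cleanup step above pipes `iptables-save` through `grep -v SPDK_NVMF` into `iptables-restore`, dropping every firewall rule the test tagged with an `SPDK_NVMF` comment while leaving unrelated rules intact. A sketch of just the text filtering on a sample ruleset (the sample rules are illustrative; a real run needs root and pipes `iptables-save` straight into `iptables-restore`):

```shell
# Filter SPDK-tagged rules out of a saved ruleset, as the iptr helper does.
# Sample ruleset stands in for real `iptables-save` output.
rules='-A INPUT -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment "SPDK_NVMF:..."
-A INPUT -i eth0 -p tcp --dport 22 -j ACCEPT'
filtered=$(printf '%s\n' "$rules" | grep -v SPDK_NVMF)
printf '%s\n' "$filtered"
```

Tagging each rule with a fixed comment at insert time is what makes this wholesale teardown possible: the cleanup never has to remember which individual rules the test added.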
00:21:29.325 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:29.325 09:05:45 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:29.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.325 --rc genhtml_branch_coverage=1 00:21:29.325 --rc genhtml_function_coverage=1 00:21:29.325 --rc genhtml_legend=1 00:21:29.325 --rc geninfo_all_blocks=1 00:21:29.325 --rc geninfo_unexecuted_blocks=1 00:21:29.325 
00:21:29.325 ' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:29.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.325 --rc genhtml_branch_coverage=1 00:21:29.325 --rc genhtml_function_coverage=1 00:21:29.325 --rc genhtml_legend=1 00:21:29.325 --rc geninfo_all_blocks=1 00:21:29.325 --rc geninfo_unexecuted_blocks=1 00:21:29.325 00:21:29.325 ' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:29.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.325 --rc genhtml_branch_coverage=1 00:21:29.325 --rc genhtml_function_coverage=1 00:21:29.325 --rc genhtml_legend=1 00:21:29.325 --rc geninfo_all_blocks=1 00:21:29.325 --rc geninfo_unexecuted_blocks=1 00:21:29.325 00:21:29.325 ' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:29.325 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:29.325 --rc genhtml_branch_coverage=1 00:21:29.325 --rc genhtml_function_coverage=1 00:21:29.325 --rc genhtml_legend=1 00:21:29.325 --rc geninfo_all_blocks=1 00:21:29.325 --rc geninfo_unexecuted_blocks=1 00:21:29.325 00:21:29.325 ' 00:21:29.325 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- 
paths/export.sh@5 -- # export PATH 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:29.326 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=71ea39b5c5cf482eb80156537c438e56 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:21:29.326 09:05:45 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:21:36.021 09:05:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:36.021 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 
-- # [[ e810 == e810 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:36.022 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:36.022 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:36.022 
09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:36.022 Found net devices under 0000:86:00.0: cvl_0_0 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:36.022 09:05:50 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:36.022 Found net devices under 0000:86:00.1: cvl_0_1 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # create_target_ns 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:36.022 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # 
eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:36.023 10.0.0.1 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 
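The `val_to_ip` helper traced above turns the 32-bit pool value (e.g. `167772161`) into dotted-quad notation with `printf`. A standalone sketch reconstructed from the traced arguments (`printf '%u.%u.%u.%u\n' 10 0 0 1`); the exact upstream implementation may differ:

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation.
# 167772161 == 0x0A000001 == 10.0.0.1
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets the setup code hand out consecutive addresses for each initiator/target pair with simple arithmetic (`ip_pool += 2`), as seen in the `setup_interfaces` trace.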
10.0.0.2 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:36.023 10.0.0.2 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:36.023 09:05:50 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 
00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:36.023 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:36.023 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:21:36.023 00:21:36.023 --- 10.0.0.1 ping statistics --- 00:21:36.023 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.023 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:36.023 
09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:36.023 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:36.024 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:36.024 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:21:36.024 00:21:36.024 --- 10.0.0.2 ping statistics --- 00:21:36.024 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:36.024 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 
00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:21:36.024 09:05:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=2409614 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # waitforlisten 2409614 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 2409614 ']' 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:36.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:36.024 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 [2024-11-20 09:05:51.252317] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:36.025 [2024-11-20 09:05:51.252364] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:36.025 [2024-11-20 09:05:51.333826] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.025 [2024-11-20 09:05:51.375693] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:36.025 [2024-11-20 09:05:51.375730] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:36.025 [2024-11-20 09:05:51.375737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:36.025 [2024-11-20 09:05:51.375743] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:21:36.025 [2024-11-20 09:05:51.375748] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:36.025 [2024-11-20 09:05:51.376316] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 [2024-11-20 09:05:51.520462] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 null0 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 71ea39b5c5cf482eb80156537c438e56 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 [2024-11-20 09:05:51.564735] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 nvme0n1 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.025 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.025 [ 00:21:36.025 { 00:21:36.025 "name": "nvme0n1", 00:21:36.025 "aliases": [ 00:21:36.025 "71ea39b5-c5cf-482e-b801-56537c438e56" 00:21:36.025 ], 00:21:36.025 "product_name": "NVMe disk", 00:21:36.025 "block_size": 512, 00:21:36.025 "num_blocks": 2097152, 00:21:36.025 "uuid": "71ea39b5-c5cf-482e-b801-56537c438e56", 00:21:36.025 "numa_id": 1, 00:21:36.025 "assigned_rate_limits": { 00:21:36.025 "rw_ios_per_sec": 0, 00:21:36.025 "rw_mbytes_per_sec": 0, 00:21:36.025 "r_mbytes_per_sec": 0, 00:21:36.025 "w_mbytes_per_sec": 0 00:21:36.025 }, 00:21:36.025 "claimed": false, 00:21:36.025 "zoned": false, 00:21:36.025 "supported_io_types": { 00:21:36.025 "read": true, 00:21:36.025 "write": true, 00:21:36.025 "unmap": false, 00:21:36.025 "flush": true, 00:21:36.025 "reset": true, 00:21:36.025 "nvme_admin": true, 00:21:36.025 "nvme_io": true, 00:21:36.025 "nvme_io_md": false, 00:21:36.025 "write_zeroes": true, 00:21:36.025 "zcopy": false, 00:21:36.025 "get_zone_info": false, 00:21:36.025 "zone_management": false, 00:21:36.025 "zone_append": false, 00:21:36.025 "compare": true, 00:21:36.025 
"compare_and_write": true, 00:21:36.025 "abort": true, 00:21:36.025 "seek_hole": false, 00:21:36.026 "seek_data": false, 00:21:36.026 "copy": true, 00:21:36.026 "nvme_iov_md": false 00:21:36.026 }, 00:21:36.026 "memory_domains": [ 00:21:36.026 { 00:21:36.026 "dma_device_id": "system", 00:21:36.026 "dma_device_type": 1 00:21:36.026 } 00:21:36.026 ], 00:21:36.026 "driver_specific": { 00:21:36.026 "nvme": [ 00:21:36.026 { 00:21:36.026 "trid": { 00:21:36.026 "trtype": "TCP", 00:21:36.026 "adrfam": "IPv4", 00:21:36.026 "traddr": "10.0.0.2", 00:21:36.026 "trsvcid": "4420", 00:21:36.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:36.026 }, 00:21:36.026 "ctrlr_data": { 00:21:36.026 "cntlid": 1, 00:21:36.026 "vendor_id": "0x8086", 00:21:36.026 "model_number": "SPDK bdev Controller", 00:21:36.026 "serial_number": "00000000000000000000", 00:21:36.026 "firmware_revision": "25.01", 00:21:36.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.026 "oacs": { 00:21:36.026 "security": 0, 00:21:36.026 "format": 0, 00:21:36.026 "firmware": 0, 00:21:36.026 "ns_manage": 0 00:21:36.026 }, 00:21:36.026 "multi_ctrlr": true, 00:21:36.026 "ana_reporting": false 00:21:36.026 }, 00:21:36.026 "vs": { 00:21:36.026 "nvme_version": "1.3" 00:21:36.026 }, 00:21:36.026 "ns_data": { 00:21:36.026 "id": 1, 00:21:36.026 "can_share": true 00:21:36.026 } 00:21:36.026 } 00:21:36.026 ], 00:21:36.026 "mp_policy": "active_passive" 00:21:36.026 } 00:21:36.026 } 00:21:36.026 ] 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 [2024-11-20 09:05:51.826296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:21:36.026 [2024-11-20 09:05:51.826355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x95e220 (9): Bad file descriptor 00:21:36.026 [2024-11-20 09:05:51.958033] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 [ 00:21:36.026 { 00:21:36.026 "name": "nvme0n1", 00:21:36.026 "aliases": [ 00:21:36.026 "71ea39b5-c5cf-482e-b801-56537c438e56" 00:21:36.026 ], 00:21:36.026 "product_name": "NVMe disk", 00:21:36.026 "block_size": 512, 00:21:36.026 "num_blocks": 2097152, 00:21:36.026 "uuid": "71ea39b5-c5cf-482e-b801-56537c438e56", 00:21:36.026 "numa_id": 1, 00:21:36.026 "assigned_rate_limits": { 00:21:36.026 "rw_ios_per_sec": 0, 00:21:36.026 "rw_mbytes_per_sec": 0, 00:21:36.026 "r_mbytes_per_sec": 0, 00:21:36.026 "w_mbytes_per_sec": 0 00:21:36.026 }, 00:21:36.026 "claimed": false, 00:21:36.026 "zoned": false, 00:21:36.026 "supported_io_types": { 00:21:36.026 "read": true, 00:21:36.026 "write": true, 00:21:36.026 "unmap": false, 00:21:36.026 "flush": true, 00:21:36.026 "reset": true, 00:21:36.026 "nvme_admin": true, 00:21:36.026 "nvme_io": true, 00:21:36.026 "nvme_io_md": false, 00:21:36.026 "write_zeroes": true, 00:21:36.026 "zcopy": false, 00:21:36.026 "get_zone_info": false, 00:21:36.026 "zone_management": false, 00:21:36.026 "zone_append": false, 00:21:36.026 "compare": true, 00:21:36.026 "compare_and_write": true, 00:21:36.026 "abort": true, 00:21:36.026 
"seek_hole": false, 00:21:36.026 "seek_data": false, 00:21:36.026 "copy": true, 00:21:36.026 "nvme_iov_md": false 00:21:36.026 }, 00:21:36.026 "memory_domains": [ 00:21:36.026 { 00:21:36.026 "dma_device_id": "system", 00:21:36.026 "dma_device_type": 1 00:21:36.026 } 00:21:36.026 ], 00:21:36.026 "driver_specific": { 00:21:36.026 "nvme": [ 00:21:36.026 { 00:21:36.026 "trid": { 00:21:36.026 "trtype": "TCP", 00:21:36.026 "adrfam": "IPv4", 00:21:36.026 "traddr": "10.0.0.2", 00:21:36.026 "trsvcid": "4420", 00:21:36.026 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:36.026 }, 00:21:36.026 "ctrlr_data": { 00:21:36.026 "cntlid": 2, 00:21:36.026 "vendor_id": "0x8086", 00:21:36.026 "model_number": "SPDK bdev Controller", 00:21:36.026 "serial_number": "00000000000000000000", 00:21:36.026 "firmware_revision": "25.01", 00:21:36.026 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.026 "oacs": { 00:21:36.026 "security": 0, 00:21:36.026 "format": 0, 00:21:36.026 "firmware": 0, 00:21:36.026 "ns_manage": 0 00:21:36.026 }, 00:21:36.026 "multi_ctrlr": true, 00:21:36.026 "ana_reporting": false 00:21:36.026 }, 00:21:36.026 "vs": { 00:21:36.026 "nvme_version": "1.3" 00:21:36.026 }, 00:21:36.026 "ns_data": { 00:21:36.026 "id": 1, 00:21:36.026 "can_share": true 00:21:36.026 } 00:21:36.026 } 00:21:36.026 ], 00:21:36.026 "mp_policy": "active_passive" 00:21:36.026 } 00:21:36.026 } 00:21:36.026 ] 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 09:05:51 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:51 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.LyUaPPsf7D 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.LyUaPPsf7D 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.LyUaPPsf7D 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 [2024-11-20 09:05:52.030908] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:21:36.026 [2024-11-20 09:05:52.031028] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:21:36.026 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.027 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.027 [2024-11-20 09:05:52.046963] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:21:36.286 nvme0n1 00:21:36.286 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.286 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:21:36.286 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.286 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.286 [ 00:21:36.286 { 00:21:36.286 "name": "nvme0n1", 00:21:36.286 "aliases": [ 00:21:36.286 "71ea39b5-c5cf-482e-b801-56537c438e56" 00:21:36.286 ], 00:21:36.286 "product_name": "NVMe disk", 00:21:36.286 "block_size": 512, 
00:21:36.286 "num_blocks": 2097152, 00:21:36.286 "uuid": "71ea39b5-c5cf-482e-b801-56537c438e56", 00:21:36.286 "numa_id": 1, 00:21:36.286 "assigned_rate_limits": { 00:21:36.286 "rw_ios_per_sec": 0, 00:21:36.286 "rw_mbytes_per_sec": 0, 00:21:36.286 "r_mbytes_per_sec": 0, 00:21:36.286 "w_mbytes_per_sec": 0 00:21:36.286 }, 00:21:36.286 "claimed": false, 00:21:36.286 "zoned": false, 00:21:36.286 "supported_io_types": { 00:21:36.286 "read": true, 00:21:36.286 "write": true, 00:21:36.286 "unmap": false, 00:21:36.286 "flush": true, 00:21:36.286 "reset": true, 00:21:36.286 "nvme_admin": true, 00:21:36.286 "nvme_io": true, 00:21:36.286 "nvme_io_md": false, 00:21:36.286 "write_zeroes": true, 00:21:36.286 "zcopy": false, 00:21:36.286 "get_zone_info": false, 00:21:36.286 "zone_management": false, 00:21:36.286 "zone_append": false, 00:21:36.286 "compare": true, 00:21:36.286 "compare_and_write": true, 00:21:36.286 "abort": true, 00:21:36.286 "seek_hole": false, 00:21:36.286 "seek_data": false, 00:21:36.286 "copy": true, 00:21:36.286 "nvme_iov_md": false 00:21:36.286 }, 00:21:36.286 "memory_domains": [ 00:21:36.286 { 00:21:36.286 "dma_device_id": "system", 00:21:36.286 "dma_device_type": 1 00:21:36.286 } 00:21:36.286 ], 00:21:36.286 "driver_specific": { 00:21:36.286 "nvme": [ 00:21:36.286 { 00:21:36.286 "trid": { 00:21:36.286 "trtype": "TCP", 00:21:36.286 "adrfam": "IPv4", 00:21:36.286 "traddr": "10.0.0.2", 00:21:36.286 "trsvcid": "4421", 00:21:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:21:36.286 }, 00:21:36.286 "ctrlr_data": { 00:21:36.286 "cntlid": 3, 00:21:36.286 "vendor_id": "0x8086", 00:21:36.286 "model_number": "SPDK bdev Controller", 00:21:36.286 "serial_number": "00000000000000000000", 00:21:36.286 "firmware_revision": "25.01", 00:21:36.286 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:21:36.286 "oacs": { 00:21:36.286 "security": 0, 00:21:36.286 "format": 0, 00:21:36.286 "firmware": 0, 00:21:36.286 "ns_manage": 0 00:21:36.286 }, 00:21:36.286 "multi_ctrlr": true, 
00:21:36.286 "ana_reporting": false 00:21:36.286 }, 00:21:36.286 "vs": { 00:21:36.286 "nvme_version": "1.3" 00:21:36.286 }, 00:21:36.286 "ns_data": { 00:21:36.286 "id": 1, 00:21:36.287 "can_share": true 00:21:36.287 } 00:21:36.287 } 00:21:36.287 ], 00:21:36.287 "mp_policy": "active_passive" 00:21:36.287 } 00:21:36.287 } 00:21:36.287 ] 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.LyUaPPsf7D 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:36.287 rmmod nvme_tcp 00:21:36.287 rmmod nvme_fabrics 00:21:36.287 rmmod nvme_keyring 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 2409614 ']' 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 2409614 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 2409614 ']' 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 2409614 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2409614 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2409614' 00:21:36.287 killing process with pid 2409614 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 2409614 00:21:36.287 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 2409614 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@264 -- # local dev 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@267 -- # remove_target_ns 
00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:36.547 09:05:52 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # return 0 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:38.454 09:05:54 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@284 -- # iptr 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-save 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:38.454 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-restore 00:21:38.714 00:21:38.714 real 0m9.580s 00:21:38.714 user 0m3.106s 00:21:38.714 sys 0m4.936s 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:21:38.714 ************************************ 00:21:38.714 END TEST nvmf_async_init 00:21:38.714 ************************************ 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:21:38.714 ************************************ 00:21:38.714 START TEST nvmf_identify 00:21:38.714 ************************************ 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:21:38.714 * Looking for test storage... 00:21:38.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:38.714 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.714 --rc genhtml_branch_coverage=1 00:21:38.714 --rc genhtml_function_coverage=1 00:21:38.714 --rc genhtml_legend=1 00:21:38.714 --rc geninfo_all_blocks=1 00:21:38.714 --rc geninfo_unexecuted_blocks=1 00:21:38.714 00:21:38.714 ' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.714 --rc genhtml_branch_coverage=1 00:21:38.714 --rc genhtml_function_coverage=1 00:21:38.714 --rc genhtml_legend=1 00:21:38.714 --rc geninfo_all_blocks=1 00:21:38.714 --rc geninfo_unexecuted_blocks=1 00:21:38.714 00:21:38.714 ' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.714 --rc genhtml_branch_coverage=1 00:21:38.714 --rc genhtml_function_coverage=1 00:21:38.714 --rc genhtml_legend=1 00:21:38.714 --rc geninfo_all_blocks=1 00:21:38.714 --rc geninfo_unexecuted_blocks=1 00:21:38.714 00:21:38.714 ' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:38.714 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:38.714 --rc genhtml_branch_coverage=1 00:21:38.714 --rc genhtml_function_coverage=1 00:21:38.714 --rc genhtml_legend=1 00:21:38.714 --rc geninfo_all_blocks=1 00:21:38.714 --rc geninfo_unexecuted_blocks=1 00:21:38.714 00:21:38.714 ' 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.714 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.715 
09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.715 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.974 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:38.975 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:21:38.975 09:05:54 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:21:45.557 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 
00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:45.557 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:45.557 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 
-- # [[ e810 == e810 ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:45.557 Found net devices under 0000:86:00.0: cvl_0_0 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:45.557 
09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:45.557 Found net devices under 0000:86:00.1: cvl_0_1 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # create_target_ns 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:45.557 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:45.558 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:45.558 10.0.0.1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:45.558 10.0.0.2 00:21:45.558 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:45.558 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:45.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:45.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:21:45.558 00:21:45.558 --- 10.0.0.1 ping statistics --- 00:21:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.558 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:45.558 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:45.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:45.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:21:45.558 00:21:45.558 --- 10.0.0.2 ping statistics --- 00:21:45.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:45.558 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:45.558 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:45.559 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:45.559 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:45.559 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:45.559 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2413449 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2413449 00:21:45.559 09:06:00 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 2413449 ']' 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:45.559 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:45.559 09:06:00 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 [2024-11-20 09:06:00.897147] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:45.559 [2024-11-20 09:06:00.897192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:45.559 [2024-11-20 09:06:00.973711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:45.559 [2024-11-20 09:06:01.016911] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:45.559 [2024-11-20 09:06:01.016954] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:45.559 [2024-11-20 09:06:01.016962] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:45.559 [2024-11-20 09:06:01.016969] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:45.559 [2024-11-20 09:06:01.016974] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:45.559 [2024-11-20 09:06:01.018550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:45.559 [2024-11-20 09:06:01.018678] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:45.559 [2024-11-20 09:06:01.018789] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.559 [2024-11-20 09:06:01.018790] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 [2024-11-20 09:06:01.118838] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.559 Malloc0 00:21:45.559 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.560 [2024-11-20 09:06:01.214329] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.560 09:06:01 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.560 [ 00:21:45.560 { 00:21:45.560 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:21:45.560 "subtype": "Discovery", 00:21:45.560 "listen_addresses": [ 00:21:45.560 { 00:21:45.560 "trtype": "TCP", 00:21:45.560 "adrfam": "IPv4", 00:21:45.560 "traddr": "10.0.0.2", 00:21:45.560 "trsvcid": "4420" 00:21:45.560 } 00:21:45.560 ], 00:21:45.560 "allow_any_host": true, 00:21:45.560 "hosts": [] 00:21:45.560 }, 00:21:45.560 { 00:21:45.560 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:21:45.560 "subtype": "NVMe", 00:21:45.560 "listen_addresses": [ 00:21:45.560 { 00:21:45.560 "trtype": "TCP", 00:21:45.560 "adrfam": "IPv4", 00:21:45.560 "traddr": "10.0.0.2", 00:21:45.560 "trsvcid": "4420" 00:21:45.560 } 00:21:45.560 ], 00:21:45.560 "allow_any_host": true, 00:21:45.560 "hosts": [], 00:21:45.560 "serial_number": "SPDK00000000000001", 00:21:45.560 "model_number": "SPDK bdev Controller", 00:21:45.560 "max_namespaces": 32, 00:21:45.560 "min_cntlid": 1, 00:21:45.560 "max_cntlid": 65519, 00:21:45.560 "namespaces": [ 00:21:45.560 { 00:21:45.560 "nsid": 1, 00:21:45.560 "bdev_name": "Malloc0", 00:21:45.560 "name": "Malloc0", 00:21:45.560 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:21:45.560 "eui64": "ABCDEF0123456789", 00:21:45.560 "uuid": "8997ddb7-7bdb-49a5-9dd2-00c64e86a0c0" 00:21:45.560 } 00:21:45.560 ] 00:21:45.560 } 00:21:45.560 ] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.560 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:21:45.560 [2024-11-20 09:06:01.268468] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:45.560 [2024-11-20 09:06:01.268519] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413498 ] 00:21:45.560 [2024-11-20 09:06:01.306930] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:21:45.560 [2024-11-20 09:06:01.310999] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:21:45.560 [2024-11-20 09:06:01.311006] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:21:45.560 [2024-11-20 09:06:01.311016] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:21:45.560 [2024-11-20 09:06:01.311026] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:21:45.560 [2024-11-20 09:06:01.315243] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:21:45.560 [2024-11-20 09:06:01.315278] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1981690 0 00:21:45.560 [2024-11-20 09:06:01.322964] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:21:45.560 [2024-11-20 09:06:01.322980] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:21:45.560 [2024-11-20 09:06:01.322985] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:21:45.560 [2024-11-20 09:06:01.322989] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:21:45.560 [2024-11-20 09:06:01.323020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.323026] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.323030] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.560 [2024-11-20 09:06:01.323042] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:21:45.560 [2024-11-20 09:06:01.323060] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.560 [2024-11-20 09:06:01.329957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.560 [2024-11-20 09:06:01.329968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.560 [2024-11-20 09:06:01.329971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.329976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.560 [2024-11-20 09:06:01.329985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:21:45.560 [2024-11-20 09:06:01.329991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:21:45.560 [2024-11-20 09:06:01.329996] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:21:45.560 [2024-11-20 09:06:01.330009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 
00:21:45.560 [2024-11-20 09:06:01.330024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.560 [2024-11-20 09:06:01.330038] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.560 [2024-11-20 09:06:01.330158] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.560 [2024-11-20 09:06:01.330165] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.560 [2024-11-20 09:06:01.330168] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330171] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.560 [2024-11-20 09:06:01.330177] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:21:45.560 [2024-11-20 09:06:01.330184] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:21:45.560 [2024-11-20 09:06:01.330190] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330196] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330200] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.560 [2024-11-20 09:06:01.330206] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.560 [2024-11-20 09:06:01.330216] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.560 [2024-11-20 09:06:01.330305] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.560 [2024-11-20 09:06:01.330310] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:21:45.560 [2024-11-20 09:06:01.330314] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330317] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.560 [2024-11-20 09:06:01.330322] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:21:45.560 [2024-11-20 09:06:01.330328] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:21:45.560 [2024-11-20 09:06:01.330334] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330338] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330341] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.560 [2024-11-20 09:06:01.330347] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.560 [2024-11-20 09:06:01.330357] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.560 [2024-11-20 09:06:01.330456] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.560 [2024-11-20 09:06:01.330464] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.560 [2024-11-20 09:06:01.330467] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330470] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.560 [2024-11-20 09:06:01.330475] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:21:45.560 [2024-11-20 09:06:01.330483] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330487] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.560 [2024-11-20 09:06:01.330491] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.560 [2024-11-20 09:06:01.330497] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.560 [2024-11-20 09:06:01.330506] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.561 [2024-11-20 09:06:01.330573] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.561 [2024-11-20 09:06:01.330579] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.561 [2024-11-20 09:06:01.330582] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330585] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.561 [2024-11-20 09:06:01.330589] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:21:45.561 [2024-11-20 09:06:01.330594] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:21:45.561 [2024-11-20 09:06:01.330601] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:21:45.561 [2024-11-20 09:06:01.330709] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:21:45.561 [2024-11-20 09:06:01.330716] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:21:45.561 [2024-11-20 09:06:01.330723] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330727] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.330736] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.561 [2024-11-20 09:06:01.330747] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.561 [2024-11-20 09:06:01.330861] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.561 [2024-11-20 09:06:01.330867] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.561 [2024-11-20 09:06:01.330870] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330874] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.561 [2024-11-20 09:06:01.330878] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:21:45.561 [2024-11-20 09:06:01.330886] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330890] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.330893] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.330899] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.561 [2024-11-20 09:06:01.330908] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.561 [2024-11-20 
09:06:01.331013] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.561 [2024-11-20 09:06:01.331020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.561 [2024-11-20 09:06:01.331023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.331027] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.561 [2024-11-20 09:06:01.331030] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:21:45.561 [2024-11-20 09:06:01.331035] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:21:45.561 [2024-11-20 09:06:01.331042] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:21:45.561 [2024-11-20 09:06:01.331054] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:21:45.561 [2024-11-20 09:06:01.331062] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.331066] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.331072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.561 [2024-11-20 09:06:01.331083] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.561 [2024-11-20 09:06:01.331183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.561 [2024-11-20 09:06:01.331189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:21:45.561 [2024-11-20 09:06:01.331192] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.331196] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981690): datao=0, datal=4096, cccid=0 00:21:45.561 [2024-11-20 09:06:01.331205] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3100) on tqpair(0x1981690): expected_datao=0, payload_size=4096 00:21:45.561 [2024-11-20 09:06:01.331210] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.331221] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.331225] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.374956] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.561 [2024-11-20 09:06:01.374968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.561 [2024-11-20 09:06:01.374971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.374975] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.561 [2024-11-20 09:06:01.374983] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:21:45.561 [2024-11-20 09:06:01.374988] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:21:45.561 [2024-11-20 09:06:01.374992] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:21:45.561 [2024-11-20 09:06:01.375000] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:21:45.561 [2024-11-20 09:06:01.375005] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:21:45.561 [2024-11-20 09:06:01.375009] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:21:45.561 [2024-11-20 09:06:01.375020] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:21:45.561 [2024-11-20 09:06:01.375026] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375030] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375034] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.375041] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.561 [2024-11-20 09:06:01.375053] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.561 [2024-11-20 09:06:01.375179] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.561 [2024-11-20 09:06:01.375185] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.561 [2024-11-20 09:06:01.375188] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.561 [2024-11-20 09:06:01.375198] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375205] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.375210] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.561 [2024-11-20 09:06:01.375216] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375219] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375222] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.375228] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.561 [2024-11-20 09:06:01.375233] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375236] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375241] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.375247] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.561 [2024-11-20 09:06:01.375252] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375255] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.561 [2024-11-20 09:06:01.375258] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.561 [2024-11-20 09:06:01.375263] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.561 [2024-11-20 09:06:01.375268] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:21:45.561 [2024-11-20 09:06:01.375276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:21:45.562 [2024-11-20 09:06:01.375281] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375284] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.375290] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.562 [2024-11-20 09:06:01.375302] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3100, cid 0, qid 0 00:21:45.562 [2024-11-20 09:06:01.375306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3280, cid 1, qid 0 00:21:45.562 [2024-11-20 09:06:01.375310] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3400, cid 2, qid 0 00:21:45.562 [2024-11-20 09:06:01.375315] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.562 [2024-11-20 09:06:01.375319] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3700, cid 4, qid 0 00:21:45.562 [2024-11-20 09:06:01.375425] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.375431] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.375434] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375438] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3700) on tqpair=0x1981690 00:21:45.562 [2024-11-20 09:06:01.375444] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:21:45.562 [2024-11-20 09:06:01.375449] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:21:45.562 [2024-11-20 09:06:01.375460] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375463] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.375469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.562 [2024-11-20 09:06:01.375479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3700, cid 4, qid 0 00:21:45.562 [2024-11-20 09:06:01.375590] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.562 [2024-11-20 09:06:01.375596] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.562 [2024-11-20 09:06:01.375599] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375602] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981690): datao=0, datal=4096, cccid=4 00:21:45.562 [2024-11-20 09:06:01.375606] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3700) on tqpair(0x1981690): expected_datao=0, payload_size=4096 00:21:45.562 [2024-11-20 09:06:01.375610] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375618] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375622] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.375636] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.375640] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375643] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x19e3700) on tqpair=0x1981690 00:21:45.562 [2024-11-20 09:06:01.375654] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:21:45.562 [2024-11-20 09:06:01.375675] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.375685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.562 [2024-11-20 09:06:01.375692] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375695] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375698] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.375703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.562 [2024-11-20 09:06:01.375716] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3700, cid 4, qid 0 00:21:45.562 [2024-11-20 09:06:01.375721] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3880, cid 5, qid 0 00:21:45.562 [2024-11-20 09:06:01.375837] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.562 [2024-11-20 09:06:01.375843] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.562 [2024-11-20 09:06:01.375846] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375849] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981690): datao=0, datal=1024, cccid=4 00:21:45.562 [2024-11-20 09:06:01.375853] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3700) on tqpair(0x1981690): expected_datao=0, payload_size=1024 00:21:45.562 [2024-11-20 09:06:01.375857] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375862] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375865] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375870] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.375875] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.375878] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.375882] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3880) on tqpair=0x1981690 00:21:45.562 [2024-11-20 09:06:01.416010] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.416024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.416028] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.416032] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3700) on tqpair=0x1981690 00:21:45.562 [2024-11-20 09:06:01.416044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.416048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.416055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.562 [2024-11-20 09:06:01.416071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3700, cid 4, qid 0 00:21:45.562 [2024-11-20 09:06:01.416170] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.562 [2024-11-20 09:06:01.416176] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.562 [2024-11-20 09:06:01.416180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.416183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981690): datao=0, datal=3072, cccid=4 00:21:45.562 [2024-11-20 09:06:01.416187] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3700) on tqpair(0x1981690): expected_datao=0, payload_size=3072 00:21:45.562 [2024-11-20 09:06:01.416191] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.416204] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.416208] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457016] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.457027] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.457030] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457034] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3700) on tqpair=0x1981690 00:21:45.562 [2024-11-20 09:06:01.457044] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457048] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1981690) 00:21:45.562 [2024-11-20 09:06:01.457055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.562 [2024-11-20 09:06:01.457071] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3700, cid 4, qid 0 00:21:45.562 [2024-11-20 
09:06:01.457139] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.562 [2024-11-20 09:06:01.457145] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.562 [2024-11-20 09:06:01.457148] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457151] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1981690): datao=0, datal=8, cccid=4 00:21:45.562 [2024-11-20 09:06:01.457155] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x19e3700) on tqpair(0x1981690): expected_datao=0, payload_size=8 00:21:45.562 [2024-11-20 09:06:01.457161] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457167] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.457171] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.500958] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.562 [2024-11-20 09:06:01.500968] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.562 [2024-11-20 09:06:01.500971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.562 [2024-11-20 09:06:01.500974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3700) on tqpair=0x1981690 00:21:45.562 ===================================================== 00:21:45.562 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:21:45.562 ===================================================== 00:21:45.562 Controller Capabilities/Features 00:21:45.562 ================================ 00:21:45.562 Vendor ID: 0000 00:21:45.562 Subsystem Vendor ID: 0000 00:21:45.562 Serial Number: .................... 00:21:45.562 Model Number: ........................................ 
00:21:45.562 Firmware Version: 25.01
00:21:45.562 Recommended Arb Burst: 0
00:21:45.562 IEEE OUI Identifier: 00 00 00
00:21:45.562 Multi-path I/O
00:21:45.562 May have multiple subsystem ports: No
00:21:45.562 May have multiple controllers: No
00:21:45.562 Associated with SR-IOV VF: No
00:21:45.562 Max Data Transfer Size: 131072
00:21:45.562 Max Number of Namespaces: 0
00:21:45.562 Max Number of I/O Queues: 1024
00:21:45.562 NVMe Specification Version (VS): 1.3
00:21:45.562 NVMe Specification Version (Identify): 1.3
00:21:45.562 Maximum Queue Entries: 128
00:21:45.562 Contiguous Queues Required: Yes
00:21:45.562 Arbitration Mechanisms Supported
00:21:45.562 Weighted Round Robin: Not Supported
00:21:45.562 Vendor Specific: Not Supported
00:21:45.562 Reset Timeout: 15000 ms
00:21:45.562 Doorbell Stride: 4 bytes
00:21:45.563 NVM Subsystem Reset: Not Supported
00:21:45.563 Command Sets Supported
00:21:45.563 NVM Command Set: Supported
00:21:45.563 Boot Partition: Not Supported
00:21:45.563 Memory Page Size Minimum: 4096 bytes
00:21:45.563 Memory Page Size Maximum: 4096 bytes
00:21:45.563 Persistent Memory Region: Not Supported
00:21:45.563 Optional Asynchronous Events Supported
00:21:45.563 Namespace Attribute Notices: Not Supported
00:21:45.563 Firmware Activation Notices: Not Supported
00:21:45.563 ANA Change Notices: Not Supported
00:21:45.563 PLE Aggregate Log Change Notices: Not Supported
00:21:45.563 LBA Status Info Alert Notices: Not Supported
00:21:45.563 EGE Aggregate Log Change Notices: Not Supported
00:21:45.563 Normal NVM Subsystem Shutdown event: Not Supported
00:21:45.563 Zone Descriptor Change Notices: Not Supported
00:21:45.563 Discovery Log Change Notices: Supported
00:21:45.563 Controller Attributes
00:21:45.563 128-bit Host Identifier: Not Supported
00:21:45.563 Non-Operational Permissive Mode: Not Supported
00:21:45.563 NVM Sets: Not Supported
00:21:45.563 Read Recovery Levels: Not Supported
00:21:45.563 Endurance Groups: Not Supported
00:21:45.563 Predictable Latency Mode: Not Supported
00:21:45.563 Traffic Based Keep Alive: Not Supported
00:21:45.563 Namespace Granularity: Not Supported
00:21:45.563 SQ Associations: Not Supported
00:21:45.563 UUID List: Not Supported
00:21:45.563 Multi-Domain Subsystem: Not Supported
00:21:45.563 Fixed Capacity Management: Not Supported
00:21:45.563 Variable Capacity Management: Not Supported
00:21:45.563 Delete Endurance Group: Not Supported
00:21:45.563 Delete NVM Set: Not Supported
00:21:45.563 Extended LBA Formats Supported: Not Supported
00:21:45.563 Flexible Data Placement Supported: Not Supported
00:21:45.563
00:21:45.563 Controller Memory Buffer Support
00:21:45.563 ================================
00:21:45.563 Supported: No
00:21:45.563
00:21:45.563 Persistent Memory Region Support
00:21:45.563 ================================
00:21:45.563 Supported: No
00:21:45.563
00:21:45.563 Admin Command Set Attributes
00:21:45.563 ============================
00:21:45.563 Security Send/Receive: Not Supported
00:21:45.563 Format NVM: Not Supported
00:21:45.563 Firmware Activate/Download: Not Supported
00:21:45.563 Namespace Management: Not Supported
00:21:45.563 Device Self-Test: Not Supported
00:21:45.563 Directives: Not Supported
00:21:45.563 NVMe-MI: Not Supported
00:21:45.563 Virtualization Management: Not Supported
00:21:45.563 Doorbell Buffer Config: Not Supported
00:21:45.563 Get LBA Status Capability: Not Supported
00:21:45.563 Command & Feature Lockdown Capability: Not Supported
00:21:45.563 Abort Command Limit: 1
00:21:45.563 Async Event Request Limit: 4
00:21:45.563 Number of Firmware Slots: N/A
00:21:45.563 Firmware Slot 1 Read-Only: N/A
00:21:45.563 Firmware Activation Without Reset: N/A
00:21:45.563 Multiple Update Detection Support: N/A
00:21:45.563 Firmware Update Granularity: No Information Provided
00:21:45.563 Per-Namespace SMART Log: No
00:21:45.563 Asymmetric Namespace Access Log Page: Not Supported
00:21:45.563 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery
00:21:45.563 Command Effects Log Page: Not Supported
00:21:45.563 Get Log Page Extended Data: Supported
00:21:45.563 Telemetry Log Pages: Not Supported
00:21:45.563 Persistent Event Log Pages: Not Supported
00:21:45.563 Supported Log Pages Log Page: May Support
00:21:45.563 Commands Supported & Effects Log Page: Not Supported
00:21:45.563 Feature Identifiers & Effects Log Page: May Support
00:21:45.563 NVMe-MI Commands & Effects Log Page: May Support
00:21:45.563 Data Area 4 for Telemetry Log: Not Supported
00:21:45.563 Error Log Page Entries Supported: 128
00:21:45.563 Keep Alive: Not Supported
00:21:45.563
00:21:45.563 NVM Command Set Attributes
00:21:45.563 ==========================
00:21:45.563 Submission Queue Entry Size
00:21:45.563 Max: 1
00:21:45.563 Min: 1
00:21:45.563 Completion Queue Entry Size
00:21:45.563 Max: 1
00:21:45.563 Min: 1
00:21:45.563 Number of Namespaces: 0
00:21:45.563 Compare Command: Not Supported
00:21:45.563 Write Uncorrectable Command: Not Supported
00:21:45.563 Dataset Management Command: Not Supported
00:21:45.563 Write Zeroes Command: Not Supported
00:21:45.563 Set Features Save Field: Not Supported
00:21:45.563 Reservations: Not Supported
00:21:45.563 Timestamp: Not Supported
00:21:45.563 Copy: Not Supported
00:21:45.563 Volatile Write Cache: Not Present
00:21:45.563 Atomic Write Unit (Normal): 1
00:21:45.563 Atomic Write Unit (PFail): 1
00:21:45.563 Atomic Compare & Write Unit: 1
00:21:45.563 Fused Compare & Write: Supported
00:21:45.563 Scatter-Gather List
00:21:45.563 SGL Command Set: Supported
00:21:45.563 SGL Keyed: Supported
00:21:45.563 SGL Bit Bucket Descriptor: Not Supported
00:21:45.563 SGL Metadata Pointer: Not Supported
00:21:45.563 Oversized SGL: Not Supported
00:21:45.563 SGL Metadata Address: Not Supported
00:21:45.563 SGL Offset: Supported
00:21:45.563 Transport SGL Data Block: Not Supported
00:21:45.563 Replay Protected Memory Block: Not Supported
00:21:45.563
00:21:45.563 Firmware Slot Information
00:21:45.563 =========================
00:21:45.563 Active slot: 0
00:21:45.563
00:21:45.563
00:21:45.563 Error Log
00:21:45.563 =========
00:21:45.563
00:21:45.563 Active Namespaces
00:21:45.563 =================
00:21:45.563 Discovery Log Page
00:21:45.563 ==================
00:21:45.563 Generation Counter: 2
00:21:45.563 Number of Records: 2
00:21:45.563 Record Format: 0
00:21:45.563
00:21:45.563 Discovery Log Entry 0
00:21:45.563 ----------------------
00:21:45.563 Transport Type: 3 (TCP)
00:21:45.563 Address Family: 1 (IPv4)
00:21:45.563 Subsystem Type: 3 (Current Discovery Subsystem)
00:21:45.563 Entry Flags:
00:21:45.563 Duplicate Returned Information: 1
00:21:45.563 Explicit Persistent Connection Support for Discovery: 1
00:21:45.563 Transport Requirements:
00:21:45.563 Secure Channel: Not Required
00:21:45.563 Port ID: 0 (0x0000)
00:21:45.563 Controller ID: 65535 (0xffff)
00:21:45.563 Admin Max SQ Size: 128
00:21:45.563 Transport Service Identifier: 4420
00:21:45.563 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery
00:21:45.563 Transport Address: 10.0.0.2
00:21:45.563 Discovery Log Entry 1
00:21:45.563 ----------------------
00:21:45.563 Transport Type: 3 (TCP)
00:21:45.563 Address Family: 1 (IPv4)
00:21:45.563 Subsystem Type: 2 (NVM Subsystem)
00:21:45.563 Entry Flags:
00:21:45.563 Duplicate Returned Information: 0
00:21:45.563 Explicit Persistent Connection Support for Discovery: 0
00:21:45.563 Transport Requirements:
00:21:45.563 Secure Channel: Not Required
00:21:45.563 Port ID: 0 (0x0000)
00:21:45.563 Controller ID: 65535 (0xffff)
00:21:45.563 Admin Max SQ Size: 128
00:21:45.563 Transport Service Identifier: 4420
00:21:45.563 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1
00:21:45.563 Transport Address: 10.0.0.2 [2024-11-20 09:06:01.501060] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD
00:21:45.563 [2024-11-20
09:06:01.501070] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3100) on tqpair=0x1981690 00:21:45.563 [2024-11-20 09:06:01.501077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.563 [2024-11-20 09:06:01.501082] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3280) on tqpair=0x1981690 00:21:45.563 [2024-11-20 09:06:01.501086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.563 [2024-11-20 09:06:01.501090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3400) on tqpair=0x1981690 00:21:45.563 [2024-11-20 09:06:01.501094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.563 [2024-11-20 09:06:01.501100] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.563 [2024-11-20 09:06:01.501104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.563 [2024-11-20 09:06:01.501115] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.563 [2024-11-20 09:06:01.501118] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.563 [2024-11-20 09:06:01.501122] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.563 [2024-11-20 09:06:01.501128] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.563 [2024-11-20 09:06:01.501142] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.563 [2024-11-20 09:06:01.501204] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.563 [2024-11-20 
09:06:01.501210] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.563 [2024-11-20 09:06:01.501213] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501216] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501222] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501226] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501229] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501234] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501247] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501323] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501329] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501332] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501335] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501339] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:21:45.564 [2024-11-20 09:06:01.501344] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:21:45.564 [2024-11-20 09:06:01.501352] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501355] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 
[2024-11-20 09:06:01.501359] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501364] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501374] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501438] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501444] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501447] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501450] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501459] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501462] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501465] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501471] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501480] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501550] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501555] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501558] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501562] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on 
tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501571] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501574] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501577] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501583] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501593] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501656] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501662] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501665] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501668] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501676] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501679] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501682] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501688] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501697] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501765] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501771] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:21:45.564 [2024-11-20 09:06:01.501774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501777] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501788] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501791] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501797] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501806] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501881] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501885] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.501893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501897] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.501900] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.501906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.501915] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.501985] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.501993] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.501996] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502000] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.502008] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502012] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502015] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.502022] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.502034] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.502096] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.502102] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.502105] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502109] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.502117] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502121] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502124] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.502130] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.502139] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.502206] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.502211] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.502214] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502217] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.502226] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502229] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502232] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.502238] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.502248] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.502315] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.502321] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.502325] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502328] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.502336] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502339] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502342] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.502348] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.564 [2024-11-20 09:06:01.502358] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.564 [2024-11-20 09:06:01.502419] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.564 [2024-11-20 09:06:01.502425] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.564 [2024-11-20 09:06:01.502430] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502433] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.564 [2024-11-20 09:06:01.502442] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502446] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.564 [2024-11-20 09:06:01.502449] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.564 [2024-11-20 09:06:01.502454] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.502464] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.502525] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.502530] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.502533] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502536] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.502544] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502549] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502552] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.502557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.502567] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.502623] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.502629] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.502632] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.502643] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502646] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502650] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.502655] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.502665] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.502735] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 
09:06:01.502741] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.502744] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502747] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.502755] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502758] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502762] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.502767] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.502777] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.502836] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.502842] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.502845] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502850] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.502858] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502861] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502864] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.502870] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 
09:06:01.502879] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.502938] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.502944] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.502951] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502955] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.502963] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502966] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.502969] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.502975] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.502985] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.503048] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.503054] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.503057] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503060] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.503068] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503072] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503075] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.503081] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.503090] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.503157] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.503163] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.503166] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503169] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.503178] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503181] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503184] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690) 00:21:45.565 [2024-11-20 09:06:01.503190] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.565 [2024-11-20 09:06:01.503200] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0 00:21:45.565 [2024-11-20 09:06:01.503261] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.565 [2024-11-20 09:06:01.503267] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.565 [2024-11-20 09:06:01.503270] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.565 [2024-11-20 09:06:01.503273] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690 00:21:45.565 [2024-11-20 09:06:01.503283] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.565 [2024-11-20 09:06:01.503287] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.565 [2024-11-20 09:06:01.503290] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1981690)
00:21:45.565 [2024-11-20 09:06:01.503296] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.565 [2024-11-20 09:06:01.503306] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x19e3580, cid 3, qid 0
00:21:45.565 [2024-11-20 09:06:01.503371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.565 [2024-11-20 09:06:01.503378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.565 [2024-11-20 09:06:01.503381] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.565 [2024-11-20 09:06:01.503386] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690
00:21:45.567 [2024-11-20 09:06:01.509075] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.567 [2024-11-20 09:06:01.509081] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.567 [2024-11-20 09:06:01.509084] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.567 [2024-11-20 09:06:01.509087] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x19e3580) on tqpair=0x1981690
00:21:45.567 [2024-11-20 09:06:01.509094] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 7 milliseconds
00:21:45.567
00:21:45.567 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all
00:21:45.567 [2024-11-20 09:06:01.545757] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:21:45.567 [2024-11-20 09:06:01.545795] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2413619 ]
00:21:45.567 [2024-11-20 09:06:01.587592] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout)
00:21:45.567 [2024-11-20 09:06:01.587638] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:21:45.567 [2024-11-20 09:06:01.587643] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:21:45.567 [2024-11-20 09:06:01.587655] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:21:45.567 [2024-11-20 09:06:01.587664] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:21:45.567 [2024-11-20 09:06:01.588073] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout)
00:21:45.567 [2024-11-20 09:06:01.588097] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x7e1690 0
00:21:45.830 [2024-11-20 09:06:01.601961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:21:45.830 [2024-11-20 09:06:01.601974] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:21:45.830 [2024-11-20 09:06:01.601978] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:21:45.830 [2024-11-20 09:06:01.601981] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:21:45.830 [2024-11-20 09:06:01.602005] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.830 [2024-11-20 09:06:01.602010] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.830 [2024-11-20 09:06:01.602013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.830 [2024-11-20 09:06:01.602024] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:21:45.830 [2024-11-20 09:06:01.602041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.830 [2024-11-20 09:06:01.609959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.830 [2024-11-20 09:06:01.609967] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.830 [2024-11-20 09:06:01.609971] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.830 [2024-11-20 09:06:01.609974] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.830 [2024-11-20 09:06:01.609985] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:21:45.830 [2024-11-20 09:06:01.609991] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout)
00:21:45.830 [2024-11-20 09:06:01.609999] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout)
00:21:45.830 [2024-11-20 09:06:01.610009] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.830 [2024-11-20 09:06:01.610013] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.830 [2024-11-20 09:06:01.610016] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.830 [2024-11-20 09:06:01.610023] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.830 [2024-11-20 09:06:01.610036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.830 [2024-11-20 09:06:01.610173] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.830 [2024-11-20 09:06:01.610179] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610182] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610185] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610190] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout)
00:21:45.831 [2024-11-20 09:06:01.610196] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout)
00:21:45.831 [2024-11-20 09:06:01.610202] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610206] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610209] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610215] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610225] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.610285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.610290] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610293] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610301] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en (no timeout)
00:21:45.831 [2024-11-20 09:06:01.610308] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610314] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610317] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610320] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610326] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610336] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.610399] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.610404] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610407] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610410] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610415] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610423] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610428] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610432] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610437] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610447] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.610510] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.610516] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610519] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610522] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610526] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0
00:21:45.831 [2024-11-20 09:06:01.610530] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610537] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610645] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1
00:21:45.831 [2024-11-20 09:06:01.610649] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610656] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610659] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610662] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610678] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.610742] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.610747] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610750] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610753] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610757] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:21:45.831 [2024-11-20 09:06:01.610765] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610769] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610772] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610778] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610787] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.610853] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.610858] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.610861] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610865] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.610868] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:21:45.831 [2024-11-20 09:06:01.610874] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms)
00:21:45.831 [2024-11-20 09:06:01.610881] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout)
00:21:45.831 [2024-11-20 09:06:01.610892] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms)
00:21:45.831 [2024-11-20 09:06:01.610900] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.610903] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.610909] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.831 [2024-11-20 09:06:01.610919] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.611017] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:21:45.831 [2024-11-20 09:06:01.611024] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:21:45.831 [2024-11-20 09:06:01.611027] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.611030] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=4096, cccid=0
00:21:45.831 [2024-11-20 09:06:01.611034] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843100) on tqpair(0x7e1690): expected_datao=0, payload_size=4096
00:21:45.831 [2024-11-20 09:06:01.611038] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.611049] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.611053] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.653955] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.653966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.653969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.653973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.831 [2024-11-20 09:06:01.653980] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295
00:21:45.831 [2024-11-20 09:06:01.653984] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072
00:21:45.831 [2024-11-20 09:06:01.653988] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001
00:21:45.831 [2024-11-20 09:06:01.653995] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16
00:21:45.831 [2024-11-20 09:06:01.653999] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1
00:21:45.831 [2024-11-20 09:06:01.654004] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms)
00:21:45.831 [2024-11-20 09:06:01.654014] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms)
00:21:45.831 [2024-11-20 09:06:01.654020] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.654024] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.654027] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.831 [2024-11-20 09:06:01.654034] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:21:45.831 [2024-11-20 09:06:01.654047] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.831 [2024-11-20 09:06:01.654128] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.831 [2024-11-20 09:06:01.654134] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.831 [2024-11-20 09:06:01.654139] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.831 [2024-11-20 09:06:01.654143] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690
00:21:45.832 [2024-11-20 09:06:01.654149] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654152] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654155] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x7e1690)
00:21:45.832 [2024-11-20 09:06:01.654161] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.832 [2024-11-20 09:06:01.654166] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654169] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654172] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x7e1690)
00:21:45.832 [2024-11-20 09:06:01.654178] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.832 [2024-11-20 09:06:01.654183] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654186] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654190] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x7e1690)
00:21:45.832 [2024-11-20 09:06:01.654194] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.832 [2024-11-20 09:06:01.654199] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654203] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654206] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690)
00:21:45.832 [2024-11-20 09:06:01.654211] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:45.832 [2024-11-20 09:06:01.654215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms)
00:21:45.832 [2024-11-20 09:06:01.654224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:21:45.832 [2024-11-20 09:06:01.654230] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654233] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690)
00:21:45.832 [2024-11-20 09:06:01.654239] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:45.832 [2024-11-20 09:06:01.654251] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843100, cid 0, qid 0
00:21:45.832 [2024-11-20 09:06:01.654255] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843280, cid 1, qid 0
00:21:45.832 [2024-11-20 09:06:01.654260] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843400, cid 2, qid 0
00:21:45.832 [2024-11-20 09:06:01.654264] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0
00:21:45.832 [2024-11-20 09:06:01.654268] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0
00:21:45.832 [2024-11-20 09:06:01.654370] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:21:45.832 [2024-11-20 09:06:01.654376] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:21:45.832 [2024-11-20 09:06:01.654379] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654383] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690
00:21:45.832 [2024-11-20 09:06:01.654389] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us
00:21:45.832 [2024-11-20 09:06:01.654395] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms)
00:21:45.832 [2024-11-20 09:06:01.654403] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms)
00:21:45.832 [2024-11-20 09:06:01.654408] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms)
00:21:45.832 [2024-11-20 09:06:01.654414] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:21:45.832 [2024-11-20 09:06:01.654417] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:21:45.832 [2024-11-20
09:06:01.654420] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690) 00:21:45.832 [2024-11-20 09:06:01.654426] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:21:45.832 [2024-11-20 09:06:01.654436] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.832 [2024-11-20 09:06:01.654497] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.832 [2024-11-20 09:06:01.654503] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.832 [2024-11-20 09:06:01.654507] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.654510] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.832 [2024-11-20 09:06:01.654563] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:21:45.832 [2024-11-20 09:06:01.654573] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:21:45.832 [2024-11-20 09:06:01.654580] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.654584] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690) 00:21:45.832 [2024-11-20 09:06:01.654589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.832 [2024-11-20 09:06:01.654600] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.832 [2024-11-20 09:06:01.654673] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.832 [2024-11-20 09:06:01.654679] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.832 [2024-11-20 09:06:01.654683] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.654686] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=4096, cccid=4 00:21:45.832 [2024-11-20 09:06:01.654690] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843700) on tqpair(0x7e1690): expected_datao=0, payload_size=4096 00:21:45.832 [2024-11-20 09:06:01.654694] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.654705] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.654708] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696108] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.832 [2024-11-20 09:06:01.696118] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.832 [2024-11-20 09:06:01.696121] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696125] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.832 [2024-11-20 09:06:01.696134] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:21:45.832 [2024-11-20 09:06:01.696143] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:21:45.832 [2024-11-20 09:06:01.696152] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:21:45.832 [2024-11-20 09:06:01.696161] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696165] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 
on tqpair(0x7e1690) 00:21:45.832 [2024-11-20 09:06:01.696172] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.832 [2024-11-20 09:06:01.696183] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.832 [2024-11-20 09:06:01.696267] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.832 [2024-11-20 09:06:01.696273] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.832 [2024-11-20 09:06:01.696276] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696279] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=4096, cccid=4 00:21:45.832 [2024-11-20 09:06:01.696283] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843700) on tqpair(0x7e1690): expected_datao=0, payload_size=4096 00:21:45.832 [2024-11-20 09:06:01.696287] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696299] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.696303] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.741957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.832 [2024-11-20 09:06:01.741969] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.832 [2024-11-20 09:06:01.741972] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.741976] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.832 [2024-11-20 09:06:01.741990] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:21:45.832 [2024-11-20 
09:06:01.742000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:21:45.832 [2024-11-20 09:06:01.742009] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.742013] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690) 00:21:45.832 [2024-11-20 09:06:01.742019] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.832 [2024-11-20 09:06:01.742032] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.832 [2024-11-20 09:06:01.742125] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.832 [2024-11-20 09:06:01.742131] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.832 [2024-11-20 09:06:01.742134] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.742137] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=4096, cccid=4 00:21:45.832 [2024-11-20 09:06:01.742141] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843700) on tqpair(0x7e1690): expected_datao=0, payload_size=4096 00:21:45.832 [2024-11-20 09:06:01.742145] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.742157] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.742162] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.832 [2024-11-20 09:06:01.785957] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.832 [2024-11-20 09:06:01.785966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.785970] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.785973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 09:06:01.785981] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.785992] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786000] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786006] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786010] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786015] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786019] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:21:45.833 [2024-11-20 09:06:01.786023] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:21:45.833 [2024-11-20 09:06:01.786028] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:21:45.833 [2024-11-20 09:06:01.786040] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786044] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786050] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786056] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786059] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786062] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:21:45.833 [2024-11-20 09:06:01.786082] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.833 [2024-11-20 09:06:01.786087] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843880, cid 5, qid 0 00:21:45.833 [2024-11-20 09:06:01.786164] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.833 [2024-11-20 09:06:01.786170] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.786173] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786176] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 09:06:01.786182] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.833 [2024-11-20 09:06:01.786187] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.786190] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843880) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 
09:06:01.786201] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786204] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786210] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786219] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843880, cid 5, qid 0 00:21:45.833 [2024-11-20 09:06:01.786288] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.833 [2024-11-20 09:06:01.786293] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.786296] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786302] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843880) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 09:06:01.786309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786313] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786318] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786327] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843880, cid 5, qid 0 00:21:45.833 [2024-11-20 09:06:01.786407] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.833 [2024-11-20 09:06:01.786413] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.786416] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786419] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete 
tcp_req(0x843880) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 09:06:01.786427] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786430] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786436] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786445] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843880, cid 5, qid 0 00:21:45.833 [2024-11-20 09:06:01.786512] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.833 [2024-11-20 09:06:01.786518] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.833 [2024-11-20 09:06:01.786521] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786524] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843880) on tqpair=0x7e1690 00:21:45.833 [2024-11-20 09:06:01.786537] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786541] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786552] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786555] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786561] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 
[2024-11-20 09:06:01.786567] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786570] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786582] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786585] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7e1690) 00:21:45.833 [2024-11-20 09:06:01.786590] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.833 [2024-11-20 09:06:01.786601] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843880, cid 5, qid 0 00:21:45.833 [2024-11-20 09:06:01.786605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843700, cid 4, qid 0 00:21:45.833 [2024-11-20 09:06:01.786609] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843a00, cid 6, qid 0 00:21:45.833 [2024-11-20 09:06:01.786613] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843b80, cid 7, qid 0 00:21:45.833 [2024-11-20 09:06:01.786751] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.833 [2024-11-20 09:06:01.786758] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.833 [2024-11-20 09:06:01.786761] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786764] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=8192, cccid=5 00:21:45.833 [2024-11-20 09:06:01.786768] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x843880) on tqpair(0x7e1690): expected_datao=0, payload_size=8192 00:21:45.833 [2024-11-20 09:06:01.786772] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786801] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786805] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786810] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.833 [2024-11-20 09:06:01.786815] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.833 [2024-11-20 09:06:01.786818] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786821] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=512, cccid=4 00:21:45.833 [2024-11-20 09:06:01.786825] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843700) on tqpair(0x7e1690): expected_datao=0, payload_size=512 00:21:45.833 [2024-11-20 09:06:01.786829] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786834] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786837] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786841] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.833 [2024-11-20 09:06:01.786846] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.833 [2024-11-20 09:06:01.786849] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786852] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=512, cccid=6 00:21:45.833 [2024-11-20 09:06:01.786856] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843a00) on tqpair(0x7e1690): expected_datao=0, 
payload_size=512 00:21:45.833 [2024-11-20 09:06:01.786860] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786865] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786868] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786873] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:21:45.833 [2024-11-20 09:06:01.786878] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:21:45.833 [2024-11-20 09:06:01.786881] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:21:45.833 [2024-11-20 09:06:01.786884] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x7e1690): datao=0, datal=4096, cccid=7 00:21:45.834 [2024-11-20 09:06:01.786888] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x843b80) on tqpair(0x7e1690): expected_datao=0, payload_size=4096 00:21:45.834 [2024-11-20 09:06:01.786893] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786899] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786902] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786910] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.834 [2024-11-20 09:06:01.786915] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.834 [2024-11-20 09:06:01.786918] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786921] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843880) on tqpair=0x7e1690 00:21:45.834 [2024-11-20 09:06:01.786932] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.834 [2024-11-20 09:06:01.786938] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.834 [2024-11-20 
09:06:01.786942] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786952] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843700) on tqpair=0x7e1690 00:21:45.834 [2024-11-20 09:06:01.786961] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.834 [2024-11-20 09:06:01.786966] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.834 [2024-11-20 09:06:01.786969] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786973] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843a00) on tqpair=0x7e1690 00:21:45.834 [2024-11-20 09:06:01.786980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.834 [2024-11-20 09:06:01.786986] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.834 [2024-11-20 09:06:01.786989] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.834 [2024-11-20 09:06:01.786992] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843b80) on tqpair=0x7e1690 00:21:45.834 ===================================================== 00:21:45.834 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:45.834 ===================================================== 00:21:45.834 Controller Capabilities/Features 00:21:45.834 ================================ 00:21:45.834 Vendor ID: 8086 00:21:45.834 Subsystem Vendor ID: 8086 00:21:45.834 Serial Number: SPDK00000000000001 00:21:45.834 Model Number: SPDK bdev Controller 00:21:45.834 Firmware Version: 25.01 00:21:45.834 Recommended Arb Burst: 6 00:21:45.834 IEEE OUI Identifier: e4 d2 5c 00:21:45.834 Multi-path I/O 00:21:45.834 May have multiple subsystem ports: Yes 00:21:45.834 May have multiple controllers: Yes 00:21:45.834 Associated with SR-IOV VF: No 00:21:45.834 Max Data Transfer Size: 131072 00:21:45.834 Max Number of Namespaces: 32 00:21:45.834 
Max Number of I/O Queues: 127 00:21:45.834 NVMe Specification Version (VS): 1.3 00:21:45.834 NVMe Specification Version (Identify): 1.3 00:21:45.834 Maximum Queue Entries: 128 00:21:45.834 Contiguous Queues Required: Yes 00:21:45.834 Arbitration Mechanisms Supported 00:21:45.834 Weighted Round Robin: Not Supported 00:21:45.834 Vendor Specific: Not Supported 00:21:45.834 Reset Timeout: 15000 ms 00:21:45.834 Doorbell Stride: 4 bytes 00:21:45.834 NVM Subsystem Reset: Not Supported 00:21:45.834 Command Sets Supported 00:21:45.834 NVM Command Set: Supported 00:21:45.834 Boot Partition: Not Supported 00:21:45.834 Memory Page Size Minimum: 4096 bytes 00:21:45.834 Memory Page Size Maximum: 4096 bytes 00:21:45.834 Persistent Memory Region: Not Supported 00:21:45.834 Optional Asynchronous Events Supported 00:21:45.834 Namespace Attribute Notices: Supported 00:21:45.834 Firmware Activation Notices: Not Supported 00:21:45.834 ANA Change Notices: Not Supported 00:21:45.834 PLE Aggregate Log Change Notices: Not Supported 00:21:45.834 LBA Status Info Alert Notices: Not Supported 00:21:45.834 EGE Aggregate Log Change Notices: Not Supported 00:21:45.834 Normal NVM Subsystem Shutdown event: Not Supported 00:21:45.834 Zone Descriptor Change Notices: Not Supported 00:21:45.834 Discovery Log Change Notices: Not Supported 00:21:45.834 Controller Attributes 00:21:45.834 128-bit Host Identifier: Supported 00:21:45.834 Non-Operational Permissive Mode: Not Supported 00:21:45.834 NVM Sets: Not Supported 00:21:45.834 Read Recovery Levels: Not Supported 00:21:45.834 Endurance Groups: Not Supported 00:21:45.834 Predictable Latency Mode: Not Supported 00:21:45.834 Traffic Based Keep ALive: Not Supported 00:21:45.834 Namespace Granularity: Not Supported 00:21:45.834 SQ Associations: Not Supported 00:21:45.834 UUID List: Not Supported 00:21:45.834 Multi-Domain Subsystem: Not Supported 00:21:45.834 Fixed Capacity Management: Not Supported 00:21:45.834 Variable Capacity Management: Not Supported 
00:21:45.834 Delete Endurance Group: Not Supported 00:21:45.834 Delete NVM Set: Not Supported 00:21:45.834 Extended LBA Formats Supported: Not Supported 00:21:45.834 Flexible Data Placement Supported: Not Supported 00:21:45.834 00:21:45.834 Controller Memory Buffer Support 00:21:45.834 ================================ 00:21:45.834 Supported: No 00:21:45.834 00:21:45.834 Persistent Memory Region Support 00:21:45.834 ================================ 00:21:45.834 Supported: No 00:21:45.834 00:21:45.834 Admin Command Set Attributes 00:21:45.834 ============================ 00:21:45.834 Security Send/Receive: Not Supported 00:21:45.834 Format NVM: Not Supported 00:21:45.834 Firmware Activate/Download: Not Supported 00:21:45.834 Namespace Management: Not Supported 00:21:45.834 Device Self-Test: Not Supported 00:21:45.834 Directives: Not Supported 00:21:45.834 NVMe-MI: Not Supported 00:21:45.834 Virtualization Management: Not Supported 00:21:45.834 Doorbell Buffer Config: Not Supported 00:21:45.834 Get LBA Status Capability: Not Supported 00:21:45.834 Command & Feature Lockdown Capability: Not Supported 00:21:45.834 Abort Command Limit: 4 00:21:45.834 Async Event Request Limit: 4 00:21:45.834 Number of Firmware Slots: N/A 00:21:45.834 Firmware Slot 1 Read-Only: N/A 00:21:45.834 Firmware Activation Without Reset: N/A 00:21:45.834 Multiple Update Detection Support: N/A 00:21:45.834 Firmware Update Granularity: No Information Provided 00:21:45.834 Per-Namespace SMART Log: No 00:21:45.834 Asymmetric Namespace Access Log Page: Not Supported 00:21:45.834 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:21:45.834 Command Effects Log Page: Supported 00:21:45.834 Get Log Page Extended Data: Supported 00:21:45.834 Telemetry Log Pages: Not Supported 00:21:45.834 Persistent Event Log Pages: Not Supported 00:21:45.834 Supported Log Pages Log Page: May Support 00:21:45.834 Commands Supported & Effects Log Page: Not Supported 00:21:45.834 Feature Identifiers & Effects Log Page:May Support 
00:21:45.834 NVMe-MI Commands & Effects Log Page: May Support 00:21:45.834 Data Area 4 for Telemetry Log: Not Supported 00:21:45.834 Error Log Page Entries Supported: 128 00:21:45.834 Keep Alive: Supported 00:21:45.834 Keep Alive Granularity: 10000 ms 00:21:45.834 00:21:45.834 NVM Command Set Attributes 00:21:45.834 ========================== 00:21:45.834 Submission Queue Entry Size 00:21:45.834 Max: 64 00:21:45.834 Min: 64 00:21:45.834 Completion Queue Entry Size 00:21:45.834 Max: 16 00:21:45.834 Min: 16 00:21:45.834 Number of Namespaces: 32 00:21:45.834 Compare Command: Supported 00:21:45.834 Write Uncorrectable Command: Not Supported 00:21:45.834 Dataset Management Command: Supported 00:21:45.834 Write Zeroes Command: Supported 00:21:45.834 Set Features Save Field: Not Supported 00:21:45.834 Reservations: Supported 00:21:45.834 Timestamp: Not Supported 00:21:45.834 Copy: Supported 00:21:45.834 Volatile Write Cache: Present 00:21:45.834 Atomic Write Unit (Normal): 1 00:21:45.834 Atomic Write Unit (PFail): 1 00:21:45.834 Atomic Compare & Write Unit: 1 00:21:45.834 Fused Compare & Write: Supported 00:21:45.834 Scatter-Gather List 00:21:45.834 SGL Command Set: Supported 00:21:45.834 SGL Keyed: Supported 00:21:45.834 SGL Bit Bucket Descriptor: Not Supported 00:21:45.834 SGL Metadata Pointer: Not Supported 00:21:45.834 Oversized SGL: Not Supported 00:21:45.834 SGL Metadata Address: Not Supported 00:21:45.834 SGL Offset: Supported 00:21:45.834 Transport SGL Data Block: Not Supported 00:21:45.834 Replay Protected Memory Block: Not Supported 00:21:45.834 00:21:45.834 Firmware Slot Information 00:21:45.834 ========================= 00:21:45.834 Active slot: 1 00:21:45.834 Slot 1 Firmware Revision: 25.01 00:21:45.834 00:21:45.834 00:21:45.834 Commands Supported and Effects 00:21:45.834 ============================== 00:21:45.834 Admin Commands 00:21:45.835 -------------- 00:21:45.835 Get Log Page (02h): Supported 00:21:45.835 Identify (06h): Supported 00:21:45.835 Abort 
(08h): Supported 00:21:45.835 Set Features (09h): Supported 00:21:45.835 Get Features (0Ah): Supported 00:21:45.835 Asynchronous Event Request (0Ch): Supported 00:21:45.835 Keep Alive (18h): Supported 00:21:45.835 I/O Commands 00:21:45.835 ------------ 00:21:45.835 Flush (00h): Supported LBA-Change 00:21:45.835 Write (01h): Supported LBA-Change 00:21:45.835 Read (02h): Supported 00:21:45.835 Compare (05h): Supported 00:21:45.835 Write Zeroes (08h): Supported LBA-Change 00:21:45.835 Dataset Management (09h): Supported LBA-Change 00:21:45.835 Copy (19h): Supported LBA-Change 00:21:45.835 00:21:45.835 Error Log 00:21:45.835 ========= 00:21:45.835 00:21:45.835 Arbitration 00:21:45.835 =========== 00:21:45.835 Arbitration Burst: 1 00:21:45.835 00:21:45.835 Power Management 00:21:45.835 ================ 00:21:45.835 Number of Power States: 1 00:21:45.835 Current Power State: Power State #0 00:21:45.835 Power State #0: 00:21:45.835 Max Power: 0.00 W 00:21:45.835 Non-Operational State: Operational 00:21:45.835 Entry Latency: Not Reported 00:21:45.835 Exit Latency: Not Reported 00:21:45.835 Relative Read Throughput: 0 00:21:45.835 Relative Read Latency: 0 00:21:45.835 Relative Write Throughput: 0 00:21:45.835 Relative Write Latency: 0 00:21:45.835 Idle Power: Not Reported 00:21:45.835 Active Power: Not Reported 00:21:45.835 Non-Operational Permissive Mode: Not Supported 00:21:45.835 00:21:45.835 Health Information 00:21:45.835 ================== 00:21:45.835 Critical Warnings: 00:21:45.835 Available Spare Space: OK 00:21:45.835 Temperature: OK 00:21:45.835 Device Reliability: OK 00:21:45.835 Read Only: No 00:21:45.835 Volatile Memory Backup: OK 00:21:45.835 Current Temperature: 0 Kelvin (-273 Celsius) 00:21:45.835 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:21:45.835 Available Spare: 0% 00:21:45.835 Available Spare Threshold: 0% 00:21:45.835 Life Percentage Used:[2024-11-20 09:06:01.787079] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 
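The controller attribute dump above is plain `Key: Value` text emitted by the identify pass of the test. As a rough illustration (a hypothetical helper, not part of SPDK or nvme-cli), such output can be folded into a dictionary for later assertions:

```python
# Minimal sketch: parse "Key: Value" lines from an NVMe identify-style dump.
# Hypothetical helper for log post-processing; not part of SPDK.
def parse_identify_dump(text: str) -> dict:
    attrs = {}
    for line in text.splitlines():
        stripped = line.strip()
        # Skip blank lines and section banners such as "=====".
        if ":" not in stripped or set(stripped) <= {"=", "-"}:
            continue
        key, _, value = stripped.partition(":")
        attrs[key.strip()] = value.strip()
    return attrs

# A few lines lifted from the dump above.
sample = """\
Keep Alive: Supported
Keep Alive Granularity: 10000 ms
Number of Namespaces: 32
Fused Compare & Write: Supported
"""
attrs = parse_identify_dump(sample)
```

Splitting on the first `:` only keeps values like `10000 ms` intact even though they contain no further delimiter.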
[2024-11-20 09:06:01.787084] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787092] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.835 [2024-11-20 09:06:01.787104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843b80, cid 7, qid 0 00:21:45.835 [2024-11-20 09:06:01.787176] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.835 [2024-11-20 09:06:01.787183] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.835 [2024-11-20 09:06:01.787187] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787191] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843b80) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787218] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:21:45.835 [2024-11-20 09:06:01.787228] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843100) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.835 [2024-11-20 09:06:01.787238] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843280) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.835 [2024-11-20 09:06:01.787246] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843400) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.835 
[2024-11-20 09:06:01.787257] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:45.835 [2024-11-20 09:06:01.787268] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787271] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787275] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787280] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.835 [2024-11-20 09:06:01.787292] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.835 [2024-11-20 09:06:01.787354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.835 [2024-11-20 09:06:01.787361] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.835 [2024-11-20 09:06:01.787364] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787367] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787378] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787383] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787387] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787393] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.835 [2024-11-20 09:06:01.787405] nvme_tcp.c: 
883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.835 [2024-11-20 09:06:01.787477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.835 [2024-11-20 09:06:01.787483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.835 [2024-11-20 09:06:01.787486] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787490] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787496] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:21:45.835 [2024-11-20 09:06:01.787500] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:21:45.835 [2024-11-20 09:06:01.787508] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787512] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787515] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787521] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.835 [2024-11-20 09:06:01.787531] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.835 [2024-11-20 09:06:01.787597] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.835 [2024-11-20 09:06:01.787603] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.835 [2024-11-20 09:06:01.787606] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787610] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787618] 
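The `nvme_ctrlr_shutdown_set_cc_done` lines above (RTD3E = 0 us, shutdown timeout = 10000 ms) mark the start of the NVMe shutdown handshake, and the long run of `FABRIC PROPERTY GET` traces that follows is the host polling CSTS until the controller reports shutdown complete. A hedged sketch of that handshake, with a stand-in controller object (the register bit positions follow the NVMe specification; everything else here is illustrative):

```python
# Sketch of the NVMe controller shutdown handshake visible in the log:
# set CC.SHN to "normal shutdown", then poll CSTS.SHST until complete.
# FakeCtrlr is a stand-in, not an SPDK object.

CC_SHN_NORMAL = 0b01 << 14       # CC.SHN field, bits 15:14
CSTS_SHST_MASK = 0b11 << 2       # CSTS.SHST field, bits 3:2
CSTS_SHST_OCCURRING = 0b01 << 2
CSTS_SHST_COMPLETE = 0b10 << 2

class FakeCtrlr:
    """Stand-in controller that finishes shutdown after a few polls."""
    def __init__(self, polls_needed: int = 3):
        self.cc = 0
        self._polls = 0
        self._needed = polls_needed

    def write_cc(self, value: int) -> None:
        self.cc = value

    def read_csts(self) -> int:
        if self.cc & CC_SHN_NORMAL:
            self._polls += 1
            if self._polls >= self._needed:
                return CSTS_SHST_COMPLETE
            return CSTS_SHST_OCCURRING
        return 0

def shutdown(ctrlr, max_polls: int = 1000) -> bool:
    # Request normal shutdown, then poll CSTS.SHST (bounded by max_polls,
    # standing in for the 10000 ms timeout seen in the log).
    ctrlr.write_cc(ctrlr.cc | CC_SHN_NORMAL)
    for _ in range(max_polls):
        if ctrlr.read_csts() & CSTS_SHST_MASK == CSTS_SHST_COMPLETE:
            return True
    return False
```

Each `read_csts` call corresponds to one `FABRIC PROPERTY GET` round trip in the trace; RTD3E = 0 means the controller advertises no resume latency, so SPDK falls back to its default 10-second shutdown timeout.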
nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787621] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787625] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787630] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.835 [2024-11-20 09:06:01.787639] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.835 [2024-11-20 09:06:01.787708] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.835 [2024-11-20 09:06:01.787714] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.835 [2024-11-20 09:06:01.787717] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787720] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.835 [2024-11-20 09:06:01.787729] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787732] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.835 [2024-11-20 09:06:01.787735] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.835 [2024-11-20 09:06:01.787741] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.787750] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.787814] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.787819] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.787824] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.787836] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787839] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.787848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.787857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.787923] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.787929] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.787932] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787935] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.787943] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787951] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.787955] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.787961] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.787970] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 
09:06:01.788034] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788040] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788043] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788046] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788055] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788058] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788061] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788076] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788144] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788149] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788152] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788156] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788164] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788168] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788171] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788177] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: 
FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788186] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788250] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788256] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788259] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788264] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788272] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788276] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788279] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788284] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788294] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788361] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788370] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788373] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788382] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788385] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 
[2024-11-20 09:06:01.788388] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788394] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788403] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788471] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788476] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788479] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788483] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788491] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788495] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788498] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788504] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788513] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788578] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788584] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788588] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788591] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 
00:21:45.836 [2024-11-20 09:06:01.788599] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788603] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788606] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788611] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788621] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788688] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788694] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788696] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788700] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788711] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788715] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788718] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788723] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788733] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788796] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788802] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 
[2024-11-20 09:06:01.788805] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788808] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788817] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788820] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788823] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788829] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788838] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.836 [2024-11-20 09:06:01.788905] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.788911] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.836 [2024-11-20 09:06:01.788914] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788917] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.836 [2024-11-20 09:06:01.788925] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788929] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.836 [2024-11-20 09:06:01.788932] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.836 [2024-11-20 09:06:01.788938] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.836 [2024-11-20 09:06:01.788951] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 
00:21:45.836 [2024-11-20 09:06:01.789014] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.836 [2024-11-20 09:06:01.789020] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789023] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789026] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789034] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789038] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789041] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789046] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789056] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789123] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789129] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789132] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789135] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789144] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789149] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789152] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789158] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789169] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789229] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789234] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789237] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789240] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789249] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789252] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789255] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789261] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789270] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789338] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789344] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789347] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789350] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789359] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789363] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789366] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789372] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789381] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789441] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789447] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789450] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789453] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789461] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789464] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789468] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789473] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789483] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789545] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789551] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789554] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789557] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789565] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789569] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789572] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789589] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789651] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789657] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789660] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789663] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789672] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789676] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789679] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789684] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789693] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789760] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 
09:06:01.789765] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789768] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789772] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789780] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789784] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789787] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789793] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.789802] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789863] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789875] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789883] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.789895] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 
09:06:01.789905] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.789971] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.789977] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.789980] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789983] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.789992] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789995] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.789998] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.790004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.790016] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.790078] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.790084] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.790087] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790090] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.790099] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790103] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790106] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.790111] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.790122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.837 [2024-11-20 09:06:01.790183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.837 [2024-11-20 09:06:01.790189] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.837 [2024-11-20 09:06:01.790192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790195] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.837 [2024-11-20 09:06:01.790204] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790208] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.837 [2024-11-20 09:06:01.790211] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.837 [2024-11-20 09:06:01.790216] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.837 [2024-11-20 09:06:01.790226] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790285] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790291] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790294] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790297] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790305] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: 
enter 00:21:45.838 [2024-11-20 09:06:01.790309] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790312] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790318] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.790326] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790408] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790417] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790420] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790423] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790429] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.790440] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790500] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790506] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 
00:21:45.838 [2024-11-20 09:06:01.790513] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790533] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.790542] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790630] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.790652] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790717] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type 
= 5 00:21:45.838 [2024-11-20 09:06:01.790722] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790726] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790729] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790737] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790741] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790744] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790749] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.790759] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790821] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790827] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790830] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790833] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.790841] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790845] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790848] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.790853] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:45.838 [2024-11-20 09:06:01.790862] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.790931] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.790936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.790939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.790943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.794956] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.794963] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.794966] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x7e1690) 00:21:45.838 [2024-11-20 09:06:01.794972] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:45.838 [2024-11-20 09:06:01.794983] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x843580, cid 3, qid 0 00:21:45.838 [2024-11-20 09:06:01.795050] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:21:45.838 [2024-11-20 09:06:01.795056] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:21:45.838 [2024-11-20 09:06:01.795059] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:21:45.838 [2024-11-20 09:06:01.795062] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x843580) on tqpair=0x7e1690 00:21:45.838 [2024-11-20 09:06:01.795069] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 7 milliseconds 00:21:45.838 0% 00:21:45.838 Data Units Read: 0 00:21:45.838 Data Units Written: 0 00:21:45.838 Host Read Commands: 0 
00:21:45.838 Host Write Commands: 0 00:21:45.838 Controller Busy Time: 0 minutes 00:21:45.838 Power Cycles: 0 00:21:45.838 Power On Hours: 0 hours 00:21:45.838 Unsafe Shutdowns: 0 00:21:45.838 Unrecoverable Media Errors: 0 00:21:45.838 Lifetime Error Log Entries: 0 00:21:45.838 Warning Temperature Time: 0 minutes 00:21:45.838 Critical Temperature Time: 0 minutes 00:21:45.838 00:21:45.838 Number of Queues 00:21:45.838 ================ 00:21:45.838 Number of I/O Submission Queues: 127 00:21:45.838 Number of I/O Completion Queues: 127 00:21:45.838 00:21:45.838 Active Namespaces 00:21:45.838 ================= 00:21:45.838 Namespace ID:1 00:21:45.838 Error Recovery Timeout: Unlimited 00:21:45.838 Command Set Identifier: NVM (00h) 00:21:45.838 Deallocate: Supported 00:21:45.838 Deallocated/Unwritten Error: Not Supported 00:21:45.838 Deallocated Read Value: Unknown 00:21:45.838 Deallocate in Write Zeroes: Not Supported 00:21:45.838 Deallocated Guard Field: 0xFFFF 00:21:45.838 Flush: Supported 00:21:45.838 Reservation: Supported 00:21:45.838 Namespace Sharing Capabilities: Multiple Controllers 00:21:45.838 Size (in LBAs): 131072 (0GiB) 00:21:45.838 Capacity (in LBAs): 131072 (0GiB) 00:21:45.838 Utilization (in LBAs): 131072 (0GiB) 00:21:45.838 NGUID: ABCDEF0123456789ABCDEF0123456789 00:21:45.838 EUI64: ABCDEF0123456789 00:21:45.838 UUID: 8997ddb7-7bdb-49a5-9dd2-00c64e86a0c0 00:21:45.838 Thin Provisioning: Not Supported 00:21:45.838 Per-NS Atomic Units: Yes 00:21:45.838 Atomic Boundary Size (Normal): 0 00:21:45.838 Atomic Boundary Size (PFail): 0 00:21:45.838 Atomic Boundary Offset: 0 00:21:45.838 Maximum Single Source Range Length: 65535 00:21:45.838 Maximum Copy Length: 65535 00:21:45.838 Maximum Source Range Count: 1 00:21:45.838 NGUID/EUI64 Never Reused: No 00:21:45.838 Namespace Write Protected: No 00:21:45.838 Number of LBA Formats: 1 00:21:45.838 Current LBA Format: LBA Format #00 00:21:45.838 LBA Format #00: Data Size: 512 Metadata Size: 0 00:21:45.838 00:21:45.838 
09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:45.838 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:21:45.839 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:45.839 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:45.839 rmmod nvme_tcp 00:21:45.839 rmmod nvme_fabrics 00:21:45.839 rmmod nvme_keyring 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 2413449 ']' 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 2413449 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 
2413449 ']' 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 2413449 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2413449 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2413449' 00:21:46.098 killing process with pid 2413449 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 2413449 00:21:46.098 09:06:01 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 2413449 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:46.098 09:06:02 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/setup.sh@41 -- # _dev=0 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-save 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-restore 00:21:48.636 00:21:48.636 real 0m9.632s 00:21:48.636 user 0m6.277s 00:21:48.636 sys 0m4.889s 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:21:48.636 ************************************ 00:21:48.636 END TEST nvmf_identify 00:21:48.636 ************************************ 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:48.636 ************************************ 00:21:48.636 START TEST nvmf_perf 00:21:48.636 ************************************ 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:21:48.636 * Looking for test storage... 
00:21:48.636 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.636 --rc genhtml_branch_coverage=1 00:21:48.636 --rc genhtml_function_coverage=1 00:21:48.636 --rc genhtml_legend=1 00:21:48.636 --rc geninfo_all_blocks=1 00:21:48.636 --rc geninfo_unexecuted_blocks=1 00:21:48.636 00:21:48.636 ' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:21:48.636 --rc genhtml_branch_coverage=1 00:21:48.636 --rc genhtml_function_coverage=1 00:21:48.636 --rc genhtml_legend=1 00:21:48.636 --rc geninfo_all_blocks=1 00:21:48.636 --rc geninfo_unexecuted_blocks=1 00:21:48.636 00:21:48.636 ' 00:21:48.636 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:48.636 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.636 --rc genhtml_branch_coverage=1 00:21:48.637 --rc genhtml_function_coverage=1 00:21:48.637 --rc genhtml_legend=1 00:21:48.637 --rc geninfo_all_blocks=1 00:21:48.637 --rc geninfo_unexecuted_blocks=1 00:21:48.637 00:21:48.637 ' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:48.637 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:48.637 --rc genhtml_branch_coverage=1 00:21:48.637 --rc genhtml_function_coverage=1 00:21:48.637 --rc genhtml_legend=1 00:21:48.637 --rc geninfo_all_blocks=1 00:21:48.637 --rc geninfo_unexecuted_blocks=1 00:21:48.637 00:21:48.637 ' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:48.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:48.637 09:06:04 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:21:48.637 09:06:04 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.208 09:06:10 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
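The discovery loop traced here resolves the network interfaces behind each matched NIC by globbing the kernel's per-device sysfs directory, then stripping the path to keep just the interface name. A minimal standalone sketch of that lookup; a synthetic sysfs tree under a temp dir stands in for the real `/sys/bus/pci/devices` so the sketch runs without hardware (the `cvl_0_*` names mirror the ones in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the pci -> net-device lookup from nvmf/common.sh: the kernel
# exposes each netdev of a PCI NIC under /sys/bus/pci/devices/<bdf>/net/.
# A synthetic sysfs tree is used here so the sketch runs without hardware.
set -eu

sysfs=$(mktemp -d)
mkdir -p "$sysfs/0000:86:00.0/net/cvl_0_0" "$sysfs/0000:86:00.1/net/cvl_0_1"

net_devs=()
for pci in "0000:86:00.0" "0000:86:00.1"; do
    pci_net_devs=("$sysfs/$pci/net/"*)
    pci_net_devs=("${pci_net_devs[@]##*/}")   # strip the path, keep the name
    echo "Found net devices under $pci: ${pci_net_devs[*]}"
    net_devs+=("${pci_net_devs[@]}")
done

rm -rf "$sysfs"
echo "${net_devs[*]}"
```

The `${pci_net_devs[@]##*/}` expansion is the same one visible in the trace at `nvmf/common.sh@243`.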
00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.208 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.208 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@255 -- # local 
total_initiator_target_pairs=1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # create_target_ns 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:55.208 10.0.0.1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
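The `val_to_ip` step traced above turns the integer 167772161 (0x0a000001) into `10.0.0.1` with a single `printf`. A standalone sketch of that conversion, together with the two-consecutive-addresses-per-pair handout (`ip` for the initiator, `ip+1` for the target) that `setup_interface_pair` performs:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip from nvmf/setup.sh, as traced above: split a 32-bit
# value into four octets, most significant first, and print dotted-quad.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}

# setup_interface_pair hands out two consecutive addresses per pair:
# ip for the initiator, ip+1 for the target (167772161 -> 10.0.0.1).
ip_pool=$(( 0x0a000001 ))
initiator_ip=$(val_to_ip "$ip_pool")
target_ip=$(val_to_ip $(( ip_pool + 1 )))
echo "initiator: $initiator_ip  target: $target_ip"
# initiator: 10.0.0.1  target: 10.0.0.2
```

This matches the `10.0.0.1`/`10.0.0.2` pair assigned to `cvl_0_0` and `cvl_0_1` in the trace.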
00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:55.208 10.0.0.2 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
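The namespace plumbing traced above boils down to a short `ip`-command sequence: create `nvmf_ns_spdk`, bring up its loopback, move the target-side port into it, assign `10.0.0.1/24` and `10.0.0.2/24`, and bring both links up. Since the real commands need root and a physical NIC, this sketch only prints what it would run; the `run` wrapper is a stand-in added here, not part of the SPDK scripts:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the target-namespace setup from nvmf/setup.sh.
# 'run' just echoes and records each command; replace it with direct
# execution (as root) to perform the setup for real.
set -eu
cmds=()
run() { echo "+ $*"; cmds+=("$*"); }

ns=nvmf_ns_spdk
initiator=cvl_0_0 target=cvl_0_1

run ip netns add "$ns"
run ip netns exec "$ns" ip link set lo up
run ip link set "$target" netns "$ns"            # move target port into the ns
run ip addr add 10.0.0.1/24 dev "$initiator"     # initiator side, host ns
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
run ip link set "$initiator" up
run ip netns exec "$ns" ip link set "$target" up
```

Isolating the target port in its own namespace is what lets initiator and target traffic traverse the real NIC pair on a single host instead of being short-circuited through the local stack.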
00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:55.208 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:55.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:21:55.208 00:21:55.208 --- 10.0.0.1 ping statistics --- 00:21:55.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.208 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 
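Note that `get_ip_address` in the trace does not query `ip(8)`: it reads back the `ifalias` attribute that `set_ip` mirrored the address into earlier (`tee /sys/class/net/cvl_0_0/ifalias`). A minimal sketch of that bookkeeping, with a temp file standing in for the sysfs `ifalias` node:

```shell
#!/usr/bin/env bash
# Sketch of the set_ip / get_ip_address bookkeeping from nvmf/setup.sh:
# the assigned address is mirrored into the device's ifalias attribute so
# later helpers can recover it with a plain cat. A temp file stands in
# for /sys/class/net/<dev>/ifalias here.
ifalias=$(mktemp)

echo 10.0.0.1 | tee "$ifalias"   # set_ip: mirror the address (tee also prints it)
ip=$(cat "$ifalias")             # get_ip_address: read it back
[ -n "$ip" ] && echo "initiator0 is at $ip"

rm -f "$ifalias"
```

Keeping the address in `ifalias` lets the helpers resolve logical names like `initiator0`/`target0` without re-parsing `ip addr` output.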
00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:55.209 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.209 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:21:55.209 00:21:55.209 --- 10.0.0.2 ping statistics --- 00:21:55.209 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.209 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # 
get_ip_address initiator0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:55.209 
09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=2417704 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 2417704 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 2417704 ']' 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:21:55.209 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:55.209 09:06:10 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.209 [2024-11-20 09:06:10.623074] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:21:55.209 [2024-11-20 09:06:10.623119] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.209 [2024-11-20 09:06:10.704840] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:55.209 [2024-11-20 09:06:10.749453] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.209 [2024-11-20 09:06:10.749488] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.209 [2024-11-20 09:06:10.749496] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.209 [2024-11-20 09:06:10.749503] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.209 [2024-11-20 09:06:10.749509] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
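The `-m 0xF` mask passed to `nvmf_tgt` above selects the reactor cores, and the notices that follow confirm reactors starting on cores 0 through 3. A sketch of decoding such a mask (the 8-bit scan width is an arbitrary choice for the example; real masks can be much wider):

```shell
#!/usr/bin/env bash
# Sketch: decode an SPDK-style -m core mask (0xF here, as passed to
# nvmf_tgt above) into the list of cores that will run reactors.
mask=$(( 0xF ))
cores=()
for (( core = 0; core < 8; core++ )); do
    if (( (mask >> core) & 1 )); then
        cores+=("$core")
    fi
done
echo "reactors on cores: ${cores[*]}"   # reactors on cores: 0 1 2 3
```

Four set bits, four reactors: consistent with the "Total cores available: 4" notice in the log.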
00:21:55.209 [2024-11-20 09:06:10.750833] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:55.209 [2024-11-20 09:06:10.750971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.209 [2024-11-20 09:06:10.750971] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.209 [2024-11-20 09:06:10.750977] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:21:55.468 09:06:11 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:21:58.758 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:21:58.758 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:21:58.758 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:21:58.758 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:21:59.017 09:06:14 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:21:59.017 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:21:59.017 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:21:59.017 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:21:59.017 09:06:14 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:21:59.277 [2024-11-20 09:06:15.160598] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:59.277 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:59.535 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:59.535 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:59.794 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:21:59.794 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:21:59.794 09:06:15 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:00.053 [2024-11-20 09:06:15.983687] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:00.053 09:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:00.312 09:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:00.312 09:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:00.312 09:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:00.312 09:06:16 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:01.690 Initializing NVMe Controllers 00:22:01.690 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:01.690 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:01.690 Initialization complete. Launching workers. 00:22:01.690 ======================================================== 00:22:01.690 Latency(us) 00:22:01.690 Device Information : IOPS MiB/s Average min max 00:22:01.690 PCIE (0000:5e:00.0) NSID 1 from core 0: 97271.13 379.97 328.64 34.15 7215.30 00:22:01.690 ======================================================== 00:22:01.690 Total : 97271.13 379.97 328.64 34.15 7215.30 00:22:01.690 00:22:01.691 09:06:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:02.628 Initializing NVMe Controllers 00:22:02.628 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:02.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:02.628 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:02.628 Initialization complete. Launching workers. 
00:22:02.628 ======================================================== 00:22:02.628 Latency(us) 00:22:02.628 Device Information : IOPS MiB/s Average min max 00:22:02.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 99.00 0.39 10461.19 107.46 45685.31 00:22:02.628 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 51.00 0.20 19681.17 6986.58 47885.40 00:22:02.628 ======================================================== 00:22:02.628 Total : 150.00 0.59 13595.99 107.46 47885.40 00:22:02.628 00:22:02.628 09:06:18 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:04.007 Initializing NVMe Controllers 00:22:04.007 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:04.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:04.007 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:04.007 Initialization complete. Launching workers. 
00:22:04.007 ======================================================== 00:22:04.007 Latency(us) 00:22:04.007 Device Information : IOPS MiB/s Average min max 00:22:04.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 10855.06 42.40 2948.08 447.02 6756.68 00:22:04.007 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3797.98 14.84 8450.09 6224.20 16195.20 00:22:04.007 ======================================================== 00:22:04.007 Total : 14653.03 57.24 4374.16 447.02 16195.20 00:22:04.007 00:22:04.007 09:06:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:04.007 09:06:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:04.007 09:06:19 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:06.541 Initializing NVMe Controllers 00:22:06.541 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.541 Controller IO queue size 128, less than required. 00:22:06.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.541 Controller IO queue size 128, less than required. 00:22:06.541 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:06.541 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:06.541 Initialization complete. Launching workers. 
00:22:06.541 ======================================================== 00:22:06.541 Latency(us) 00:22:06.541 Device Information : IOPS MiB/s Average min max 00:22:06.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1760.20 440.05 73494.72 47774.86 128545.74 00:22:06.541 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 599.90 149.97 224771.35 96112.87 355875.13 00:22:06.541 ======================================================== 00:22:06.541 Total : 2360.10 590.02 111946.74 47774.86 355875.13 00:22:06.541 00:22:06.800 09:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:06.800 No valid NVMe controllers or AIO or URING devices found 00:22:06.800 Initializing NVMe Controllers 00:22:06.800 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:06.800 Controller IO queue size 128, less than required. 00:22:06.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.800 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:06.800 Controller IO queue size 128, less than required. 00:22:06.800 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:06.800 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:06.800 WARNING: Some requested NVMe devices were skipped 00:22:06.800 09:06:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:10.084 Initializing NVMe Controllers 00:22:10.084 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:10.084 Controller IO queue size 128, less than required. 00:22:10.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.084 Controller IO queue size 128, less than required. 00:22:10.084 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:10.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:10.084 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:10.084 Initialization complete. Launching workers. 
00:22:10.084 00:22:10.084 ==================== 00:22:10.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:10.084 TCP transport: 00:22:10.084 polls: 14801 00:22:10.084 idle_polls: 11509 00:22:10.084 sock_completions: 3292 00:22:10.084 nvme_completions: 5969 00:22:10.084 submitted_requests: 8836 00:22:10.084 queued_requests: 1 00:22:10.084 00:22:10.084 ==================== 00:22:10.084 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:10.084 TCP transport: 00:22:10.084 polls: 15428 00:22:10.084 idle_polls: 11524 00:22:10.084 sock_completions: 3904 00:22:10.084 nvme_completions: 6613 00:22:10.084 submitted_requests: 9824 00:22:10.084 queued_requests: 1 00:22:10.084 ======================================================== 00:22:10.084 Latency(us) 00:22:10.084 Device Information : IOPS MiB/s Average min max 00:22:10.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1490.25 372.56 87475.05 61805.23 167100.37 00:22:10.084 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1651.06 412.77 78675.90 41504.09 135100.37 00:22:10.084 ======================================================== 00:22:10.084 Total : 3141.32 785.33 82850.25 41504.09 167100.37 00:22:10.084 00:22:10.084 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:10.084 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@99 -- # sync 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:10.085 rmmod nvme_tcp 00:22:10.085 rmmod nvme_fabrics 00:22:10.085 rmmod nvme_keyring 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 2417704 ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 2417704 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 2417704 ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 2417704 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2417704 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2417704' 00:22:10.085 killing process with pid 2417704 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 2417704 00:22:10.085 09:06:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 2417704 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:11.461 09:06:27 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:13.369 09:06:29 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:13.369 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@284 -- # iptr 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-save 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-restore 00:22:13.370 00:22:13.370 real 0m25.025s 00:22:13.370 user 1m5.749s 00:22:13.370 sys 0m8.449s 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:13.370 ************************************ 00:22:13.370 END TEST nvmf_perf 00:22:13.370 ************************************ 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:13.370 ************************************ 00:22:13.370 START TEST nvmf_fio_host 00:22:13.370 ************************************ 00:22:13.370 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:13.634 * Looking for test storage... 00:22:13.634 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:13.634 09:06:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:13.634 09:06:29 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.634 --rc genhtml_branch_coverage=1 00:22:13.634 --rc genhtml_function_coverage=1 00:22:13.634 --rc genhtml_legend=1 00:22:13.634 --rc geninfo_all_blocks=1 00:22:13.634 --rc geninfo_unexecuted_blocks=1 00:22:13.634 00:22:13.634 ' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.634 --rc genhtml_branch_coverage=1 00:22:13.634 --rc genhtml_function_coverage=1 00:22:13.634 --rc genhtml_legend=1 00:22:13.634 --rc geninfo_all_blocks=1 00:22:13.634 --rc geninfo_unexecuted_blocks=1 00:22:13.634 00:22:13.634 ' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.634 --rc genhtml_branch_coverage=1 00:22:13.634 --rc genhtml_function_coverage=1 00:22:13.634 --rc genhtml_legend=1 00:22:13.634 --rc geninfo_all_blocks=1 00:22:13.634 --rc geninfo_unexecuted_blocks=1 00:22:13.634 00:22:13.634 ' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:13.634 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:13.634 --rc genhtml_branch_coverage=1 00:22:13.634 --rc genhtml_function_coverage=1 00:22:13.634 --rc genhtml_legend=1 00:22:13.634 --rc geninfo_all_blocks=1 00:22:13.634 --rc geninfo_unexecuted_blocks=1 00:22:13.634 00:22:13.634 ' 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.634 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:13.635 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:22:13.635 09:06:29 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:22:20.204 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:20.204 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:20.204 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:20.204 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:20.204 Found net devices under 0000:86:00.0: cvl_0_0 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:20.204 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:20.205 Found net devices under 0000:86:00.1: cvl_0_1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # create_target_ns 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.205 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # 
_ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:20.205 10.0.0.1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:20.205 10.0.0.2 
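The `val_to_ip` steps in the trace above turn the decimal pool values 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2. A minimal stand-alone reconstruction of that helper — the log only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the octet extraction by bit-shifting is an assumption about how nvmf/setup.sh derives those four values:

```shell
# Sketch of the val_to_ip helper seen in the trace. The shifting below is
# a guess at the real implementation; the trace only shows the resulting
# printf call with the octets already split out.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2
```

This matches the two conversions logged for cvl_0_0 and cvl_0_1.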
00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:20.205 
09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:20.205 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:20.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:20.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.375 ms 00:22:20.206 00:22:20.206 --- 10.0.0.1 ping statistics --- 00:22:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.206 rtt min/avg/max/mdev = 0.375/0.375/0.375/0.000 ms 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:20.206 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:20.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:20.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:22:20.206 00:22:20.206 --- 10.0.0.2 ping statistics --- 00:22:20.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:20.206 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:20.206 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:20.206 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:20.206 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:20.206 09:06:35 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.206 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2423851 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@28 -- # waitforlisten 2423851 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 2423851 ']' 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:20.207 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.207 [2024-11-20 09:06:35.701786] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:22:20.207 [2024-11-20 09:06:35.701837] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:20.207 [2024-11-20 09:06:35.783998] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:20.207 [2024-11-20 09:06:35.825179] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:20.207 [2024-11-20 09:06:35.825222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:20.207 [2024-11-20 09:06:35.825230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:20.207 [2024-11-20 09:06:35.825237] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:22:20.207 [2024-11-20 09:06:35.825242] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:20.207 [2024-11-20 09:06:35.826709] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:20.207 [2024-11-20 09:06:35.826819] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:20.207 [2024-11-20 09:06:35.826902] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:20.207 [2024-11-20 09:06:35.826903] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:22:20.207 09:06:35 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:20.207 [2024-11-20 09:06:36.108973] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:20.207 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:22:20.207 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:20.207 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.207 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:22:20.467 Malloc1 00:22:20.467 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:20.726 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:22:20.985 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:20.985 [2024-11-20 09:06:36.969116] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:20.985 09:06:36 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:21.243 09:06:37 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:21.243 09:06:37 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:22:21.501 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:22:21.501 fio-3.35 00:22:21.501 Starting 1 thread 00:22:24.028 00:22:24.028 test: (groupid=0, jobs=1): err= 0: pid=2424428: Wed Nov 20 09:06:39 2024 00:22:24.028 read: IOPS=11.5k, BW=45.0MiB/s (47.2MB/s)(90.3MiB/2005msec) 00:22:24.028 slat (nsec): min=1588, max=238022, avg=1744.92, stdev=2221.66 00:22:24.028 clat (usec): min=3208, max=10755, avg=6129.73, stdev=484.81 00:22:24.028 lat (usec): min=3240, max=10757, avg=6131.47, stdev=484.76 00:22:24.028 clat percentiles (usec): 00:22:24.028 | 1.00th=[ 4948], 5.00th=[ 5407], 10.00th=[ 5538], 20.00th=[ 5735], 00:22:24.028 | 30.00th=[ 5932], 40.00th=[ 5997], 50.00th=[ 6128], 60.00th=[ 6259], 00:22:24.028 | 70.00th=[ 6390], 80.00th=[ 6521], 90.00th=[ 6718], 95.00th=[ 6849], 00:22:24.028 | 99.00th=[ 7177], 99.50th=[ 7373], 99.90th=[ 8586], 99.95th=[ 9896], 00:22:24.028 | 99.99th=[10683] 00:22:24.028 bw ( KiB/s): min=45445, max=46912, per=99.92%, avg=46091.25, stdev=652.44, samples=4 00:22:24.028 iops : min=11361, max=11728, avg=11522.75, stdev=163.19, samples=4 00:22:24.028 write: IOPS=11.5k, BW=44.7MiB/s (46.9MB/s)(89.7MiB/2005msec); 0 zone resets 00:22:24.028 slat (nsec): min=1618, max=233154, avg=1803.61, stdev=1713.95 00:22:24.028 clat (usec): min=2487, max=9674, avg=4970.07, stdev=406.58 00:22:24.028 lat (usec): min=2503, max=9676, avg=4971.87, stdev=406.66 00:22:24.028 clat percentiles (usec): 00:22:24.028 | 1.00th=[ 4047], 5.00th=[ 4359], 10.00th=[ 4490], 20.00th=[ 4686], 00:22:24.028 | 30.00th=[ 4752], 40.00th=[ 4883], 50.00th=[ 4948], 60.00th=[ 5080], 00:22:24.028 | 70.00th=[ 5145], 80.00th=[ 5276], 90.00th=[ 5407], 95.00th=[ 5538], 00:22:24.028 | 99.00th=[ 5866], 99.50th=[ 6259], 99.90th=[ 7898], 99.95th=[ 8586], 00:22:24.028 | 99.99th=[ 9241] 00:22:24.028 bw ( KiB/s): min=45632, max=45888, per=99.93%, avg=45787.00, stdev=118.90, samples=4 00:22:24.028 iops : min=11408, max=11472, avg=11446.75, 
stdev=29.73, samples=4 00:22:24.028 lat (msec) : 4=0.45%, 10=99.53%, 20=0.02% 00:22:24.028 cpu : usr=72.75%, sys=26.20%, ctx=77, majf=0, minf=3 00:22:24.028 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:22:24.028 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:24.028 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:24.028 issued rwts: total=23122,22967,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:24.028 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:24.028 00:22:24.028 Run status group 0 (all jobs): 00:22:24.028 READ: bw=45.0MiB/s (47.2MB/s), 45.0MiB/s-45.0MiB/s (47.2MB/s-47.2MB/s), io=90.3MiB (94.7MB), run=2005-2005msec 00:22:24.028 WRITE: bw=44.7MiB/s (46.9MB/s), 44.7MiB/s-44.7MiB/s (46.9MB/s-46.9MB/s), io=89.7MiB (94.1MB), run=2005-2005msec 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:24.028 09:06:39 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:22:24.028 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:22:24.029 09:06:39 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:22:24.287 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:22:24.287 fio-3.35 00:22:24.287 Starting 1 thread 00:22:26.816 00:22:26.816 test: (groupid=0, jobs=1): err= 0: pid=2424996: Wed Nov 20 09:06:42 2024 00:22:26.816 read: IOPS=10.8k, BW=168MiB/s (177MB/s)(338MiB/2005msec) 00:22:26.816 slat (nsec): min=2539, max=86689, avg=2816.11, stdev=1252.85 00:22:26.816 clat (usec): min=1468, max=13621, avg=6828.19, stdev=1609.11 00:22:26.816 lat (usec): min=1471, max=13627, avg=6831.01, stdev=1609.24 00:22:26.816 clat percentiles (usec): 00:22:26.816 | 1.00th=[ 3687], 5.00th=[ 4424], 10.00th=[ 4817], 20.00th=[ 5407], 00:22:26.816 | 30.00th=[ 5866], 40.00th=[ 6325], 50.00th=[ 6783], 60.00th=[ 7242], 00:22:26.816 | 70.00th=[ 7701], 80.00th=[ 8029], 90.00th=[ 8848], 95.00th=[ 9503], 00:22:26.816 | 99.00th=[11076], 99.50th=[11600], 99.90th=[12780], 99.95th=[12911], 00:22:26.816 | 99.99th=[12911] 00:22:26.816 bw ( KiB/s): min=83168, max=95872, per=50.86%, avg=87696.00, stdev=5733.66, samples=4 00:22:26.816 iops : min= 5198, max= 5992, avg=5481.00, stdev=358.35, samples=4 00:22:26.816 write: IOPS=6385, BW=99.8MiB/s (105MB/s)(179MiB/1796msec); 0 zone resets 00:22:26.816 slat (usec): min=29, max=381, avg=31.61, stdev= 7.32 00:22:26.816 clat (usec): min=3311, max=15165, avg=8759.92, stdev=1505.50 00:22:26.816 lat (usec): min=3341, max=15276, avg=8791.52, stdev=1506.87 00:22:26.816 clat percentiles (usec): 00:22:26.816 | 1.00th=[ 5669], 5.00th=[ 6652], 10.00th=[ 7046], 20.00th=[ 7504], 00:22:26.816 | 30.00th=[ 7898], 40.00th=[ 8225], 50.00th=[ 8586], 60.00th=[ 8979], 00:22:26.816 | 70.00th=[ 9372], 80.00th=[ 9896], 90.00th=[10814], 95.00th=[11600], 00:22:26.816 | 99.00th=[12649], 99.50th=[13173], 99.90th=[14615], 99.95th=[14877], 00:22:26.816 | 
99.99th=[15008] 00:22:26.816 bw ( KiB/s): min=87424, max=99712, per=89.39%, avg=91328.00, stdev=5781.77, samples=4 00:22:26.816 iops : min= 5464, max= 6232, avg=5708.00, stdev=361.36, samples=4 00:22:26.816 lat (msec) : 2=0.03%, 4=1.42%, 10=89.67%, 20=8.88% 00:22:26.816 cpu : usr=86.33%, sys=12.97%, ctx=46, majf=0, minf=3 00:22:26.816 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:22:26.816 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:22:26.816 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:22:26.816 issued rwts: total=21607,11468,0,0 short=0,0,0,0 dropped=0,0,0,0 00:22:26.816 latency : target=0, window=0, percentile=100.00%, depth=128 00:22:26.816 00:22:26.816 Run status group 0 (all jobs): 00:22:26.816 READ: bw=168MiB/s (177MB/s), 168MiB/s-168MiB/s (177MB/s-177MB/s), io=338MiB (354MB), run=2005-2005msec 00:22:26.816 WRITE: bw=99.8MiB/s (105MB/s), 99.8MiB/s-99.8MiB/s (105MB/s-105MB/s), io=179MiB (188MB), run=1796-1796msec 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 
00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:26.816 rmmod nvme_tcp 00:22:26.816 rmmod nvme_fabrics 00:22:26.816 rmmod nvme_keyring 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 2423851 ']' 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 2423851 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 2423851 ']' 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 2423851 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:26.816 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2423851 00:22:27.076 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:27.076 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:27.076 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2423851' 00:22:27.076 killing process with pid 2423851 00:22:27.076 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 2423851 00:22:27.076 09:06:42 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 2423851 00:22:27.076 09:06:43 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:27.076 09:06:43 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:29.614 
09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-save 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-restore 00:22:29.614 00:22:29.614 real 0m15.739s 00:22:29.614 user 0m45.962s 00:22:29.614 sys 0m6.512s 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.614 ************************************ 00:22:29.614 END TEST nvmf_fio_host 00:22:29.614 ************************************ 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.614 ************************************ 00:22:29.614 START TEST nvmf_failover 00:22:29.614 ************************************ 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:22:29.614 * Looking for test storage... 00:22:29.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.614 --rc genhtml_branch_coverage=1 00:22:29.614 --rc genhtml_function_coverage=1 00:22:29.614 --rc genhtml_legend=1 00:22:29.614 --rc geninfo_all_blocks=1 00:22:29.614 --rc geninfo_unexecuted_blocks=1 00:22:29.614 00:22:29.614 ' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.614 --rc genhtml_branch_coverage=1 00:22:29.614 --rc genhtml_function_coverage=1 00:22:29.614 --rc genhtml_legend=1 00:22:29.614 --rc geninfo_all_blocks=1 00:22:29.614 --rc geninfo_unexecuted_blocks=1 00:22:29.614 00:22:29.614 ' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.614 --rc genhtml_branch_coverage=1 00:22:29.614 --rc genhtml_function_coverage=1 00:22:29.614 --rc genhtml_legend=1 00:22:29.614 --rc geninfo_all_blocks=1 00:22:29.614 --rc geninfo_unexecuted_blocks=1 00:22:29.614 00:22:29.614 ' 00:22:29.614 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:29.614 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:29.614 --rc genhtml_branch_coverage=1 00:22:29.614 --rc genhtml_function_coverage=1 00:22:29.615 --rc genhtml_legend=1 00:22:29.615 --rc geninfo_all_blocks=1 00:22:29.615 --rc geninfo_unexecuted_blocks=1 00:22:29.615 00:22:29.615 ' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:29.615 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:22:29.615 09:06:45 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:22:36.186 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:22:36.187 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:36.187 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:36.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:36.187 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:36.187 Found net devices under 0000:86:00.0: cvl_0_0 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:36.187 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:36.187 Found net devices under 0000:86:00.1: cvl_0_1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # create_target_ns 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:36.187 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:36.187 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 
-- # ip=10.0.0.1 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:36.187 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:36.188 10.0.0.1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:36.188 
09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:36.188 10.0.0.2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@541 -- # 
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 
00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:36.188 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:36.188 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:22:36.188 00:22:36.188 --- 10.0.0.1 ping statistics --- 00:22:36.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.188 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:36.188 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:36.188 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:36.188 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:22:36.188 00:22:36.188 --- 10.0.0.2 ping statistics --- 00:22:36.188 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:36.188 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:36.188 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:36.188 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:36.189 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:36.189 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:36.189 09:06:51 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=2428839 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 2428839 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 
2428839 ']' 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:36.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.189 [2024-11-20 09:06:51.528505] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:22:36.189 [2024-11-20 09:06:51.528556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:36.189 [2024-11-20 09:06:51.609616] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:22:36.189 [2024-11-20 09:06:51.654092] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:36.189 [2024-11-20 09:06:51.654132] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:36.189 [2024-11-20 09:06:51.654139] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:36.189 [2024-11-20 09:06:51.654146] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:36.189 [2024-11-20 09:06:51.654151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:36.189 [2024-11-20 09:06:51.655559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:36.189 [2024-11-20 09:06:51.655667] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:36.189 [2024-11-20 09:06:51.655668] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:22:36.189 [2024-11-20 09:06:51.968814] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:36.189 09:06:51 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:22:36.189 Malloc0 00:22:36.448 09:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:36.448 09:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:36.750 09:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:37.101 [2024-11-20 09:06:52.793686] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:37.101 09:06:52 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:37.101 [2024-11-20 09:06:53.006257] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:37.101 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:22:37.360 [2024-11-20 09:06:53.214907] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2429261 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2429261 /var/tmp/bdevperf.sock 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2429261 ']' 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:22:37.360 09:06:53 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:22:37.360 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:37.360 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:22:37.620 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:37.620 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:22:37.620 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:37.879 NVMe0n1 00:22:37.879 09:06:53 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:38.138 00:22:38.397 09:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2429283 00:22:38.397 09:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:22:38.397 09:06:54 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:22:39.335 09:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:39.595 [2024-11-20 09:06:55.379445] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x8672d0 is same with the state(6) to be set 00:22:39.595 09:06:55 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:22:42.886 09:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:22:42.886 00:22:42.886 09:06:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:22:43.147 [2024-11-20 09:06:59.050313] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.147 [2024-11-20 09:06:59.050356] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6)
to be set 00:22:43.148 [2024-11-20 09:06:59.050816]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050829] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050834] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050840] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050845] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 [2024-11-20 09:06:59.050868] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x868060 is same with the state(6) to be set 00:22:43.148 09:06:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:22:46.436 09:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:46.436 [2024-11-20 09:07:02.274469] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:22:46.436 09:07:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1
00:22:47.373 09:07:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:47.632 09:07:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2429283
00:22:54.212 {
00:22:54.212   "results": [
00:22:54.212     {
00:22:54.212       "job": "NVMe0n1",
00:22:54.212       "core_mask": "0x1",
00:22:54.212       "workload": "verify",
00:22:54.212       "status": "finished",
00:22:54.212       "verify_range": {
00:22:54.212         "start": 0,
00:22:54.212         "length": 16384
00:22:54.212       },
00:22:54.212       "queue_depth": 128,
00:22:54.212       "io_size": 4096,
00:22:54.212       "runtime": 15.003444,
00:22:54.212       "iops": 11097.252070924516,
00:22:54.212       "mibps": 43.34864090204889,
00:22:54.212       "io_failed": 4253,
00:22:54.212       "io_timeout": 0,
00:22:54.212       "avg_latency_us": 11224.747481051627,
00:22:54.212       "min_latency_us": 436.31304347826085,
00:22:54.212       "max_latency_us": 21427.42260869565
00:22:54.212     }
00:22:54.212   ],
00:22:54.212   "core_count": 1
00:22:54.212 }
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2429261 ']'
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2429261'
killing process with pid 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2429261
00:22:54.212 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:22:54.212 [2024-11-20 09:06:53.291641] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:22:54.212 [2024-11-20 09:06:53.291698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2429261 ]
00:22:54.212 [2024-11-20 09:06:53.367738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:54.212 [2024-11-20 09:06:53.409859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:22:54.212 Running I/O for 15 seconds... 
00:22:54.212 11359.00 IOPS, 44.37 MiB/s [2024-11-20T08:07:10.253Z] [2024-11-20 09:06:55.381854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:99864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:54.212 [2024-11-20 09:06:55.381886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.212 [2024-11-20 09:06:55.381902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:54.212 [2024-11-20 09:06:55.381910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:54.214 [2024-11-20 09:06:55.383070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:100560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.214 [2024-11-20 09:06:55.383076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.214 [2024-11-20 09:06:55.383084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:100568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.214 [2024-11-20 09:06:55.383091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.214 [2024-11-20 09:06:55.383099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:100576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.214 [2024-11-20 09:06:55.383105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:100592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:100600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:100608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:100616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:100632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:100648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:22:54.215 [2024-11-20 09:06:55.383238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:100656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:100664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:100680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:100688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383318] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:100696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:100704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:100712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:100720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.215 [2024-11-20 09:06:55.383370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383390] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100728 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383413] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383418] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100736 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383437] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383445] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100744 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383464] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383469] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100752 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383487] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383492] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99928 len:8 PRP1 0x0 PRP2 0x0 
00:22:54.215 [2024-11-20 09:06:55.383504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383511] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383516] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99936 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383534] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383539] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99944 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383559] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383566] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99952 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383585] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383590] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99960 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383608] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383613] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99968 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383631] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383637] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99976 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383655] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383660] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.215 [2024-11-20 09:06:55.383666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99984 len:8 PRP1 0x0 PRP2 0x0 00:22:54.215 [2024-11-20 09:06:55.383672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.215 [2024-11-20 09:06:55.383679] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.215 [2024-11-20 09:06:55.383684] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100760 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383701] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383706] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100768 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383725] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383729] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100776 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383751] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383756] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100784 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383775] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383780] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100792 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383798] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383803] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100800 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383826] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: 
*NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100808 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383845] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383850] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100816 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383868] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383873] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100824 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383891] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383896] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100832 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383908] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383919] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100840 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.383941] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.383945] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.383954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100848 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.383961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394625] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.394634] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.394641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100856 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.394648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394656] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:22:54.216 [2024-11-20 09:06:55.394661] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.394668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100864 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.394675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394682] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.394688] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.394693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100872 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.394699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394706] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.216 [2024-11-20 09:06:55.394710] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.216 [2024-11-20 09:06:55.394716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100880 len:8 PRP1 0x0 PRP2 0x0 00:22:54.216 [2024-11-20 09:06:55.394723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394766] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:22:54.216 [2024-11-20 09:06:55.394789] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.216 
[2024-11-20 09:06:55.394797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.216 [2024-11-20 09:06:55.394811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394818] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.216 [2024-11-20 09:06:55.394824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394832] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.216 [2024-11-20 09:06:55.394840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.216 [2024-11-20 09:06:55.394847] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:22:54.216 [2024-11-20 09:06:55.394886] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb1340 (9): Bad file descriptor 00:22:54.216 [2024-11-20 09:06:55.398767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:22:54.216 [2024-11-20 09:06:55.429532] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:22:54.216 11046.00 IOPS, 43.15 MiB/s [2024-11-20T08:07:10.257Z] 11108.33 IOPS, 43.39 MiB/s [2024-11-20T08:07:10.257Z] 11127.50 IOPS, 43.47 MiB/s [2024-11-20T08:07:10.257Z]
00:22:54.216-00:22:54.220 [2024-11-20 09:06:59.052-09:06:59.053] nvme_qpair.c: repeated *NOTICE* pairs from 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: queued READ commands (sqid:1, lba:41976-42288, len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE commands (sqid:1, lba:42352-42960, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) each completed with status "ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0"
00:22:54.220 [2024-11-20 09:06:59.053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: (WRITE sqid:1 cid:0 nsid:1 lba:42944-42960 len:8 PRP1 0x0 PRP2 0x0), interleaved with 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
queued i/o 00:22:54.220 [2024-11-20 09:06:59.053991] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.053996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42968 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054009] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054014] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42976 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054033] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054038] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:42984 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054056] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054063] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:42992 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054082] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054087] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42296 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054106] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054111] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42304 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054129] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054134] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42312 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 
09:06:59.054152] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054157] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42320 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054177] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054182] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42328 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.054200] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.054205] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 09:06:59.054210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42336 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.054216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.065175] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.220 [2024-11-20 09:06:59.065188] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.220 [2024-11-20 
09:06:59.065197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:42344 len:8 PRP1 0x0 PRP2 0x0 00:22:54.220 [2024-11-20 09:06:59.065206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.065257] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:22:54.220 [2024-11-20 09:06:59.065286] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.220 [2024-11-20 09:06:59.065296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.065306] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.220 [2024-11-20 09:06:59.065315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.220 [2024-11-20 09:06:59.065324] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.220 [2024-11-20 09:06:59.065334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:06:59.065344] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.221 [2024-11-20 09:06:59.065352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:06:59.065361] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed 
state. 00:22:54.221 [2024-11-20 09:06:59.065390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb1340 (9): Bad file descriptor 00:22:54.221 [2024-11-20 09:06:59.069273] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller 00:22:54.221 [2024-11-20 09:06:59.099271] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful. 00:22:54.221 11012.80 IOPS, 43.02 MiB/s [2024-11-20T08:07:10.262Z] 11044.67 IOPS, 43.14 MiB/s [2024-11-20T08:07:10.262Z] 11070.86 IOPS, 43.25 MiB/s [2024-11-20T08:07:10.262Z] 11081.88 IOPS, 43.29 MiB/s [2024-11-20T08:07:10.262Z] 11110.56 IOPS, 43.40 MiB/s [2024-11-20T08:07:10.262Z] [2024-11-20 09:07:03.501515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:55632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.221 [2024-11-20 09:07:03.501555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:54688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:54696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:54704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 
[2024-11-20 09:07:03.501609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:54712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:54720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:54728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:54736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:54744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:54752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:54760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:54768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:54776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:54784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:54792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:54800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:54808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:55640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.221 [2024-11-20 09:07:03.501820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:55648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.221 [2024-11-20 09:07:03.501837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:54816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:54824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 
[2024-11-20 09:07:03.501866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:54832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:54840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:54848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:54856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:54864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501953] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:54872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:54880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:54888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.501989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.501997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:54896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:54904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:54912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:54920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:54928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:54936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.221 [2024-11-20 09:07:03.502089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:54944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.221 [2024-11-20 09:07:03.502095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:54952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:54960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:54.222 [2024-11-20 09:07:03.502124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:54968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:54976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:54984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:54992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:55000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502204] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:55008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:55016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:55024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:55032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:55040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:55048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:55056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:55064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:55072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:55080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:55088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:55096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:54.222 [2024-11-20 09:07:03.502378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:55104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:55112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:55120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:55128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:55136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:55144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:55152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:55160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:55168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:55176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:55184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:55192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:55200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:55208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:55216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.222 [2024-11-20 09:07:03.502602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.222 [2024-11-20 09:07:03.502610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:55224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:55232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:54.223 [2024-11-20 09:07:03.502631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:55240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:55248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:55256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:55264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:55272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:55280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:55288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:55296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:55304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:55312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:55320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:55328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:55336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:55344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:55352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:55360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 
[2024-11-20 09:07:03.502883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:55376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:55384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:55392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:55400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:55408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502970] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:44 nsid:1 lba:55416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:55424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.502991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.502999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:55432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.503006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.503014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:55440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.503021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.503029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:55448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.503036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.503043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:55456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.503050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:22:54.223 [2024-11-20 09:07:03.503058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:55464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.223 [2024-11-20 09:07:03.503064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:55472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:55480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:55488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:55496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:55656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503137] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:55664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:55672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:55680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:55688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:55696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 
nsid:1 lba:55704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:54.224 [2024-11-20 09:07:03.503224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:55504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:55512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:55520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:55528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:55536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:22:54.224 [2024-11-20 09:07:03.503305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:55544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:55552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:55560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:55568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:55576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:55584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503386] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:55592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:55600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:55608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:55616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:54.224 [2024-11-20 09:07:03.503444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1cde840 is same with the state(6) to be set 00:22:54.224 [2024-11-20 09:07:03.503459] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:22:54.224 [2024-11-20 09:07:03.503464] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:22:54.224 [2024-11-20 09:07:03.503471] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:55624 len:8 PRP1 0x0 PRP2 0x0 00:22:54.224 [2024-11-20 09:07:03.503478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503522] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:22:54.224 [2024-11-20 09:07:03.503545] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.224 [2024-11-20 09:07:03.503552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.224 [2024-11-20 09:07:03.503566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.224 [2024-11-20 09:07:03.503582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503589] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:54.224 [2024-11-20 09:07:03.503596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:54.224 [2024-11-20 09:07:03.503603] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state. 
00:22:54.224 [2024-11-20 09:07:03.506456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:22:54.224 [2024-11-20 09:07:03.506485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cb1340 (9): Bad file descriptor
00:22:54.224 [2024-11-20 09:07:03.529126] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:22:54.224 11072.40 IOPS, 43.25 MiB/s [2024-11-20T08:07:10.265Z] 11081.36 IOPS, 43.29 MiB/s [2024-11-20T08:07:10.265Z] 11092.50 IOPS, 43.33 MiB/s [2024-11-20T08:07:10.265Z] 11102.31 IOPS, 43.37 MiB/s [2024-11-20T08:07:10.265Z] 11094.57 IOPS, 43.34 MiB/s
00:22:54.224 Latency(us)
00:22:54.224 [2024-11-20T08:07:10.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.224 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:22:54.224 Verification LBA range: start 0x0 length 0x4000
00:22:54.224 NVMe0n1 : 15.00 11097.25 43.35 283.47 0.00 11224.75 436.31 21427.42
00:22:54.224 [2024-11-20T08:07:10.265Z] ===================================================================================================================
00:22:54.224 [2024-11-20T08:07:10.265Z] Total : 11097.25 43.35 283.47 0.00 11224.75 436.31 21427.42
00:22:54.224 Received shutdown signal, test time was about 15.000000 seconds
00:22:54.224
00:22:54.224 Latency(us)
00:22:54.224 [2024-11-20T08:07:10.265Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:22:54.224 [2024-11-20T08:07:10.266Z] ===================================================================================================================
00:22:54.225 [2024-11-20T08:07:10.266Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover --
host/failover.sh@65 -- # count=3
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2431811
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2431811 /var/tmp/bdevperf.sock
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 2431811 ']'
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:22:54.225 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:22:54.225 09:07:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:22:54.225 [2024-11-20 09:07:10.011433] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:22:54.225 09:07:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:22:54.225 [2024-11-20 09:07:10.216050] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:22:54.484 09:07:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:54.743 NVMe0n1
00:22:54.743 09:07:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:55.001
00:22:55.001 09:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f
ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:22:55.264
00:22:55.523 09:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:55.523 09:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:22:55.523 09:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:22:55.781 09:07:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:22:59.069 09:07:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:22:59.069 09:07:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:22:59.069 09:07:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2432734
00:22:59.069 09:07:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:22:59.069 09:07:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2432734
00:23:00.005 {
00:23:00.005 "results": [
00:23:00.005 {
00:23:00.005 "job": "NVMe0n1",
00:23:00.005 "core_mask": "0x1",
00:23:00.005 "workload": "verify",
00:23:00.005 "status": "finished",
00:23:00.005 "verify_range": {
00:23:00.005 "start": 0,
00:23:00.005 "length": 16384
00:23:00.005 },
00:23:00.005 "queue_depth": 128,
00:23:00.005 "io_size": 4096,
00:23:00.005 "runtime": 1.006433,
00:23:00.005 "iops": 10963.471984722282,
00:23:00.005 "mibps": 42.826062440321415,
00:23:00.005 "io_failed": 0,
00:23:00.005 "io_timeout": 0,
00:23:00.005 "avg_latency_us":
11617.117996390603, 00:23:00.005 "min_latency_us": 2379.241739130435, 00:23:00.005 "max_latency_us": 12138.40695652174 00:23:00.005 } 00:23:00.005 ], 00:23:00.005 "core_count": 1 00:23:00.005 } 00:23:00.264 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:00.264 [2024-11-20 09:07:09.618688] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:23:00.264 [2024-11-20 09:07:09.618740] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2431811 ] 00:23:00.264 [2024-11-20 09:07:09.691349] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:00.264 [2024-11-20 09:07:09.729458] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.264 [2024-11-20 09:07:11.693765] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:00.264 [2024-11-20 09:07:11.693808] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.264 [2024-11-20 09:07:11.693820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.265 [2024-11-20 09:07:11.693829] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.265 [2024-11-20 09:07:11.693836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.265 [2024-11-20 09:07:11.693843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 
cdw10:00000000 cdw11:00000000 00:23:00.265 [2024-11-20 09:07:11.693850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.265 [2024-11-20 09:07:11.693857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:00.265 [2024-11-20 09:07:11.693864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:00.265 [2024-11-20 09:07:11.693871] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state. 00:23:00.265 [2024-11-20 09:07:11.693894] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller 00:23:00.265 [2024-11-20 09:07:11.693907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x202e340 (9): Bad file descriptor 00:23:00.265 [2024-11-20 09:07:11.743939] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful. 00:23:00.265 Running I/O for 1 seconds... 
00:23:00.265 10906.00 IOPS, 42.60 MiB/s 00:23:00.265 Latency(us) 00:23:00.265 [2024-11-20T08:07:16.306Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:00.265 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:00.265 Verification LBA range: start 0x0 length 0x4000 00:23:00.265 NVMe0n1 : 1.01 10963.47 42.83 0.00 0.00 11617.12 2379.24 12138.41 00:23:00.265 [2024-11-20T08:07:16.306Z] =================================================================================================================== 00:23:00.265 [2024-11-20T08:07:16.306Z] Total : 10963.47 42.83 0.00 0.00 11617.12 2379.24 12138.41 00:23:00.265 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.265 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:00.265 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:00.524 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:00.524 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:00.783 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:01.041 09:07:16 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:04.334 09:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:04.334 09:07:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2431811 ']' 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2431811' 00:23:04.334 killing process with pid 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2431811 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:04.334 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:04.593 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:04.594 rmmod nvme_tcp 00:23:04.594 rmmod nvme_fabrics 00:23:04.594 rmmod nvme_keyring 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 2428839 ']' 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 2428839 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 2428839 ']' 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 2428839 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2428839 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2428839' 00:23:04.594 killing process with pid 2428839 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 2428839 00:23:04.594 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 2428839 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:04.853 09:07:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_0 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-save 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-restore 00:23:07.407 00:23:07.407 real 0m37.691s 00:23:07.407 user 
1m59.025s 00:23:07.407 sys 0m8.075s 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:07.407 ************************************ 00:23:07.407 END TEST nvmf_failover 00:23:07.407 ************************************ 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.407 ************************************ 00:23:07.407 START TEST nvmf_host_multipath_status 00:23:07.407 ************************************ 00:23:07.407 09:07:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:23:07.407 * Looking for test storage... 
00:23:07.407 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:23:07.407 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:23:07.408 09:07:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:07.408 09:07:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.408 --rc genhtml_branch_coverage=1 00:23:07.408 --rc genhtml_function_coverage=1 00:23:07.408 --rc genhtml_legend=1 00:23:07.408 --rc geninfo_all_blocks=1 00:23:07.408 --rc geninfo_unexecuted_blocks=1 00:23:07.408 00:23:07.408 ' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.408 --rc genhtml_branch_coverage=1 00:23:07.408 --rc genhtml_function_coverage=1 00:23:07.408 --rc genhtml_legend=1 00:23:07.408 --rc geninfo_all_blocks=1 00:23:07.408 --rc geninfo_unexecuted_blocks=1 00:23:07.408 00:23:07.408 ' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.408 --rc genhtml_branch_coverage=1 00:23:07.408 --rc genhtml_function_coverage=1 00:23:07.408 --rc genhtml_legend=1 00:23:07.408 --rc geninfo_all_blocks=1 00:23:07.408 --rc geninfo_unexecuted_blocks=1 00:23:07.408 00:23:07.408 ' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:07.408 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:07.408 --rc genhtml_branch_coverage=1 00:23:07.408 --rc genhtml_function_coverage=1 00:23:07.408 --rc genhtml_legend=1 00:23:07.408 --rc geninfo_all_blocks=1 00:23:07.408 --rc geninfo_unexecuted_blocks=1 00:23:07.408 00:23:07.408 ' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:23:07.408 
09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.408 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:07.409 09:07:23 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:07.409 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:23:07.409 09:07:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:23:13.982 09:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.982 09:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.982 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.982 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:13.982 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.983 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.983 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # create_target_ns 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:13.983 09:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:13.983 09:07:28 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:13.983 10.0.0.1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 
2 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:13.983 09:07:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:13.983 10.0.0.2 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.983 
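The `set_ip` steps traced above lean on a small helper (`val_to_ip` in nvmf/setup.sh) that unpacks a 32-bit integer from the IP pool into dotted-quad notation. A minimal standalone sketch of that conversion, using only bash arithmetic (the byte-shifting body is an assumption; the trace only shows the final `printf`):

```shell
# Sketch of the val_to_ip helper seen in the trace: unpack a 32-bit
# integer into four bytes and print them as a dotted quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(((val >> 24) & 255)) \
    $(((val >> 16) & 255)) \
    $(((val >> 8) & 255)) \
    $((val & 255))
}

val_to_ip 167772161   # 10.0.0.1  (0x0a000001, the pool base)
val_to_ip 167772162   # 10.0.0.2  (the paired target address)
```

This is why each initiator/target pair consumes two consecutive pool values: pair N gets `base + 2N` and `base + 2N + 1`, as in the `(( ip_pool += 2 ))` step above.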
09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:13.983 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:13.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:13.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:23:13.984 00:23:13.984 --- 10.0.0.1 ping statistics --- 00:23:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.984 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:23:13.984 09:07:29 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:13.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:13.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:23:13.984 00:23:13.984 --- 10.0.0.2 ping statistics --- 00:23:13.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.984 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:23:13.984 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=2437208 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 2437208 00:23:13.985 
09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2437208 ']' 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.985 [2024-11-20 09:07:29.303112] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:23:13.985 [2024-11-20 09:07:29.303160] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.985 [2024-11-20 09:07:29.382166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:23:13.985 [2024-11-20 09:07:29.423467] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.985 [2024-11-20 09:07:29.423504] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:13.985 [2024-11-20 09:07:29.423511] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.985 [2024-11-20 09:07:29.423518] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:13.985 [2024-11-20 09:07:29.423523] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.985 [2024-11-20 09:07:29.424690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.985 [2024-11-20 09:07:29.424691] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2437208 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:13.985 [2024-11-20 09:07:29.729605] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:23:13.985 Malloc0 00:23:13.985 09:07:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:23:14.243 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:14.502 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.761 [2024-11-20 09:07:30.560295] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:14.761 [2024-11-20 09:07:30.752769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2437463 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2437463 /var/tmp/bdevperf.sock 00:23:14.761 09:07:30 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 2437463 ']' 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:14.761 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:14.761 09:07:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:15.020 09:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:15.020 09:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:23:15.020 09:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:23:15.279 09:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:15.845 Nvme0n1 00:23:15.845 09:07:31 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:23:16.104 Nvme0n1 00:23:16.104 09:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:23:16.104 09:07:32 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:23:18.637 09:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:23:18.637 09:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:18.637 09:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:18.637 09:07:34 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:23:19.573 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:23:19.573 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:19.573 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.573 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:19.832 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:19.832 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:19.832 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:19.832 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:20.091 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:20.091 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:20.091 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.091 09:07:35 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:20.350 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.608 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.608 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:20.608 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:20.608 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:20.867 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:20.867 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:23:20.867 09:07:36 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:21.125 09:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:21.384 09:07:37 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:23:22.319 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:23:22.320 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:22.320 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.320 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:22.578 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:22.578 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:22.578 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.578 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:22.837 09:07:38 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:23.096 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.096 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:23.096 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.096 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:23.353 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.353 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:23.353 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:23.354 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:23.612 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:23.612 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:23:23.612 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:23.871 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:24.129 09:07:39 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:23:25.065 09:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:23:25.065 09:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:25.065 09:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.065 09:07:40 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:25.323 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.323 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:25.323 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.323 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:25.323 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:25.324 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:25.324 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.324 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:25.582 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.582 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:25.582 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:25.582 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:25.841 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:25.841 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:25.841 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:25.841 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.100 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.100 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:26.100 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:26.100 09:07:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:26.358 09:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:26.358 09:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:23:26.358 09:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:26.358 09:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:26.617 09:07:42 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:23:27.553 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:23:27.553 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:27.811 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:27.811 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:27.811 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:27.811 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:27.812 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:27.812 09:07:43 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.070 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:28.070 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:28.070 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.070 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:28.329 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.329 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:28.329 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.329 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:28.588 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.588 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:28.588 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.588 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:28.847 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:28.847 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:28.847 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:28.847 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:29.106 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:29.106 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:23:29.106 09:07:44 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:29.106 09:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:29.364 09:07:45 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:23:30.389 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:23:30.389 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:30.389 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.389 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:30.647 09:07:46 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.647 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:30.647 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.647 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:30.906 09:07:46 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:31.165 
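Every `port_status` invocation in this trace follows one pattern: dump `bdev_nvme_get_io_paths` over the bdevperf RPC socket, select the path whose `trsvcid` matches the port, extract one attribute (`current`, `connected`, or `accessible`), and compare it with the expected value. A self-contained sketch of that pattern, using a hand-made JSON stand-in for the real RPC output (field layout abbreviated to just what the jq filter touches; `jq` assumed available):

```shell
# Sketch of the port_status helper: pick one attribute of one I/O path
# out of bdev_nvme_get_io_paths output and compare it to an expectation.
# paths_json is a hand-made stand-in for live RPC output.
paths_json='{
  "poll_groups": [
    { "io_paths": [
        { "transport": { "trsvcid": "4420" },
          "current": true,  "connected": true, "accessible": true },
        { "transport": { "trsvcid": "4421" },
          "current": false, "connected": true, "accessible": true }
    ] }
  ]
}'

port_status() {  # usage: port_status <trsvcid> <attr> <expected>
    local got
    got=$(echo "$paths_json" |
          jq -r ".poll_groups[].io_paths[] |
                 select(.transport.trsvcid==\"$1\").$2")
    [[ $got == "$3" ]]
}
```

The real helper substitutes `rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths` for the canned JSON; the jq filter is the same one visible in the trace.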
09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:31.165 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:31.165 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.165 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:31.425 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.425 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:31.425 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:31.425 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:31.683 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:31.683 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:23:31.683 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:23:31.941 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:31.941 09:07:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:23:33.317 09:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:23:33.317 09:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:33.317 09:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.318 09:07:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.318 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.576 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:33.835 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:33.835 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:23:33.835 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:33.835 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:34.093 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:34.093 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:34.093 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:34.093 09:07:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:34.352 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:34.352 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:23:34.610 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:23:34.610 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:23:34.610 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:34.869 09:07:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:23:35.807 09:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:23:35.807 09:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:35.807 09:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:35.807 09:07:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:36.066 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.066 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:36.066 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:36.066 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.325 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.325 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:36.325 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:36.325 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.584 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.584 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:36.584 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:23:36.584 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:36.843 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:36.843 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:36.843 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:36.843 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:37.101 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.101 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:37.101 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:37.101 09:07:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:37.101 09:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:37.101 09:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:23:37.101 09:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:37.360 09:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:23:37.619 09:07:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:23:38.555 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:23:38.555 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:23:38.555 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.555 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:38.814 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:38.814 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:38.814 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:38.814 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:39.072 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.072 09:07:54 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:39.072 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.072 09:07:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:39.331 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.331 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:39.331 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.331 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.591 
09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:39.591 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:39.850 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:39.850 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:23:39.850 09:07:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:40.109 09:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:23:40.367 09:07:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:23:41.304 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:23:41.304 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:41.304 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.304 09:07:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:41.563 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.563 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:23:41.563 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.563 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:41.822 09:07:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:41.822 09:07:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:42.081 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.081 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:42.081 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:42.081 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.339 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.339 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:23:42.339 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:42.340 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:42.598 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:42.598 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:23:42.598 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:23:42.857 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:23:43.116 09:07:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:23:44.051 09:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:23:44.051 09:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:23:44.051 09:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.051 09:07:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:44.310 09:08:00 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.310 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:23:44.570 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.570 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:23:44.570 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:44.570 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:23:44.829 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:44.829 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:23:44.829 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:23:44.829 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.088 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:23:45.088 
09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:23:45.088 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:23:45.088 09:08:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2437463 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2437463 ']' 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2437463 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437463 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437463' 00:23:45.347 killing process with pid 2437463 00:23:45.347 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2437463 00:23:45.347 
09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2437463 00:23:45.347 { 00:23:45.347 "results": [ 00:23:45.347 { 00:23:45.347 "job": "Nvme0n1", 00:23:45.347 "core_mask": "0x4", 00:23:45.348 "workload": "verify", 00:23:45.348 "status": "terminated", 00:23:45.348 "verify_range": { 00:23:45.348 "start": 0, 00:23:45.348 "length": 16384 00:23:45.348 }, 00:23:45.348 "queue_depth": 128, 00:23:45.348 "io_size": 4096, 00:23:45.348 "runtime": 29.046898, 00:23:45.348 "iops": 10412.643718444566, 00:23:45.348 "mibps": 40.674389525174085, 00:23:45.348 "io_failed": 0, 00:23:45.348 "io_timeout": 0, 00:23:45.348 "avg_latency_us": 12272.183257490695, 00:23:45.348 "min_latency_us": 416.72347826086957, 00:23:45.348 "max_latency_us": 3078254.4139130437 00:23:45.348 } 00:23:45.348 ], 00:23:45.348 "core_count": 1 00:23:45.348 } 00:23:45.634 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2437463 00:23:45.634 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.634 [2024-11-20 09:07:30.821612] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:23:45.634 [2024-11-20 09:07:30.821666] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2437463 ] 00:23:45.634 [2024-11-20 09:07:30.898404] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:45.634 [2024-11-20 09:07:30.939035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:45.634 Running I/O for 90 seconds... 
00:23:45.634 11310.00 IOPS, 44.18 MiB/s [2024-11-20T08:08:01.675Z] 11311.50 IOPS, 44.19 MiB/s [2024-11-20T08:08:01.675Z] 11330.67 IOPS, 44.26 MiB/s [2024-11-20T08:08:01.675Z] 11362.75 IOPS, 44.39 MiB/s [2024-11-20T08:08:01.675Z] 11374.60 IOPS, 44.43 MiB/s [2024-11-20T08:08:01.675Z] 11353.33 IOPS, 44.35 MiB/s [2024-11-20T08:08:01.675Z] 11348.57 IOPS, 44.33 MiB/s [2024-11-20T08:08:01.675Z] 11337.50 IOPS, 44.29 MiB/s [2024-11-20T08:08:01.675Z] 11310.22 IOPS, 44.18 MiB/s [2024-11-20T08:08:01.675Z] 11313.00 IOPS, 44.19 MiB/s [2024-11-20T08:08:01.675Z] 11295.27 IOPS, 44.12 MiB/s [2024-11-20T08:08:01.675Z] 11316.25 IOPS, 44.20 MiB/s [2024-11-20T08:08:01.675Z] [2024-11-20 09:07:45.104895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.634 [2024-11-20 09:07:45.104933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.634 [2024-11-20 09:07:45.104975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.634 [2024-11-20 09:07:45.104985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.634 [2024-11-20 09:07:45.104998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.634 [2024-11-20 09:07:45.105006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.634 [2024-11-20 09:07:45.105019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.634 [2024-11-20 09:07:45.105026] nvme_qpair.c: 
00:23:45.634 [2024-11-20 09:07:45.105039] … [2024-11-20 09:07:45.108446] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: repeated command/completion pairs on qid:1 — WRITE sqid:1 nsid:1 lba:120840–121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:120520–120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 — each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) cdw0:0 sqhd:006d–0060 (wrapping) p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-11-20 09:07:45.108455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-11-20 09:07:45.108475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-11-20 09:07:45.108494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108552] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.637 [2024-11-20 09:07:45.108629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.637 [2024-11-20 09:07:45.108690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.637 [2024-11-20 09:07:45.108703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108769] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108878] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.108981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.108988] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109441] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109661] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.638 [2024-11-20 09:07:45.109700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.638 [2024-11-20 09:07:45.109707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109767] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109876] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.109982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.109989] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.110002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.119787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.639 [2024-11-20 09:07:45.119796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.639 [2024-11-20 09:07:45.120111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.640 [2024-11-20 09:07:45.120119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
[... repetitive log output condensed: between 09:07:45.119 and 09:07:45.123 the same command/completion pattern repeats for roughly 130 further I/Os — every WRITE (SGL DATA BLOCK OFFSET) and READ (SGL TRANSPORT DATA BLOCK TRANSPORT) on qid:1, lba range 120512–121528, completes with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 0x0028–0x007f and wrapping to 0x0000–0x001b ...]
00:23:45.642 [2024-11-20 09:07:45.123370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123395] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123522] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.123634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.123643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.124095] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.124110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.124127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.124136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.642 [2024-11-20 09:07:45.124151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.642 [2024-11-20 09:07:45.124160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124232] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.643 [2024-11-20 09:07:45.124446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124493] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124628] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124757] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.643 [2024-11-20 09:07:45.124952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.643 [2024-11-20 09:07:45.124968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.124976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.124992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125023] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125159] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125713] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.125832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125847] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.125981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.644 [2024-11-20 09:07:45.125990] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.644 [2024-11-20 09:07:45.126005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.644 [2024-11-20 09:07:45.126026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
[... repeated entries elided: further WRITE/READ command/completion pairs on sqid:1/qid:1 (lba range 120512-121416, len:8), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0065 through 0055, timestamps 09:07:45.126-09:07:45.134 ...]
[2024-11-20 09:07:45.134814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134841] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.134984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.134994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.135010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.647 [2024-11-20 09:07:45.135019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.135035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.647 [2024-11-20 09:07:45.135045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.135063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.647 [2024-11-20 09:07:45.135072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.135088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.647 [2024-11-20 09:07:45.135097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.647 [2024-11-20 09:07:45.135113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.648 [2024-11-20 09:07:45.135122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135139] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.648 [2024-11-20 09:07:45.135148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.648 [2024-11-20 09:07:45.135172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.648 [2024-11-20 09:07:45.135355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135423] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135697] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135828] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135971] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.135980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.135996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.136004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.136020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.136029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.136045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.136053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.648 [2024-11-20 09:07:45.136069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.648 [2024-11-20 09:07:45.136078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136119] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.136399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.136410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137229] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137400] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137721] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.137972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.137990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.649 [2024-11-20 09:07:45.138001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.649 [2024-11-20 09:07:45.138020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138048] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.650 [2024-11-20 09:07:45.138236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138530] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138694] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138854] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.138976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.138994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.139024] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.139053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.139082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.139111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.650 [2024-11-20 09:07:45.139140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.650 [2024-11-20 09:07:45.139150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139182] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.139355] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.139366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140344] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140515] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.651 [2024-11-20 09:07:45.140643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140673] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140844] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.140972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.140982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141012] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.651 [2024-11-20 09:07:45.141150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.651 [2024-11-20 09:07:45.141161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141180] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141337] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.141794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.141805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142663] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.652 [2024-11-20 09:07:45.142830] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.652 [2024-11-20 09:07:45.142840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.142859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.142869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.142891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.142901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.142919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.142930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.142954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.142965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.142983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.142994] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143320] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143483] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.653 [2024-11-20 09:07:45.143551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143640] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143804] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.653 [2024-11-20 09:07:45.143967] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.653 [2024-11-20 09:07:45.143985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.143996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144130] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144285] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144452] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.144462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144606] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.144626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.144636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145632] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.654 [2024-11-20 09:07:45.145760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145789] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.654 [2024-11-20 09:07:45.145937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.654 [2024-11-20 09:07:45.145963] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.655 [2024-11-20 09:07:45.145974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.145992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146118] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146257] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146362] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146581] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146692] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.655 [2024-11-20 09:07:45.146791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.655 [2024-11-20 09:07:45.146798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147372] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147482] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147594] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147699] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147808] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147914] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.656 [2024-11-20 09:07:45.147926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.656 [2024-11-20 09:07:45.147933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.147946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.147958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.147970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.147977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.147990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.147997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148137] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148247] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148352] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148573] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.657 [2024-11-20 09:07:45.148613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.148632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.148646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.148653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.149201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.149214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.149228] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.657 [2024-11-20 09:07:45.149236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.657 [2024-11-20 09:07:45.149248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149334] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149446] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.658 [2024-11-20 09:07:45.149590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149770] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149879] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.149983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.149992] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.658 [2024-11-20 09:07:45.150006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.658 [2024-11-20 09:07:45.150013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150106] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150211] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150320] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150971] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.150983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.150990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151080] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151296] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.659 [2024-11-20 09:07:45.151315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.659 [2024-11-20 09:07:45.151324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151512] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.660 [2024-11-20 09:07:45.151911] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.151930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.151955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.151979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.151991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.151998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152030] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152248] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152356] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.660 [2024-11-20 09:07:45.152376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.660 [2024-11-20 09:07:45.152388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152572] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152681] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152787] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152899] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.661 [2024-11-20 09:07:45.152973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.152986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.152993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.153435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.153446] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.153460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.153467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.153480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.661 [2024-11-20 09:07:45.153487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.661 [2024-11-20 09:07:45.153499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.662 [2024-11-20 09:07:45.153506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.662 [2024-11-20 09:07:45.153518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.662 [2024-11-20 09:07:45.153526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.662 [2024-11-20 09:07:45.153538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.662 [2024-11-20 09:07:45.153545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.662 [2024-11-20 09:07:45.153558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.662 [2024-11-20 09:07:45.153564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.662
[... identical command/completion pairs collapsed: from 09:07:45.153 through 09:07:45.156, nvme_qpair.c repeated the same pattern for every outstanding I/O on qid:1 — WRITE commands (lba 120880–121496, len:8, SGL DATA BLOCK OFFSET) and READ commands (lba 120520–120808, len:8, SGL TRANSPORT DATA BLOCK), each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) ...] [2024-11-20 09:07:45.156511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.156524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.156531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.156544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.156550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.156563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.156570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.156980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.156991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.665 [2024-11-20 09:07:45.157012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157025] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157135] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157245] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157351] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157460] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.665 [2024-11-20 09:07:45.157526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.665 [2024-11-20 09:07:45.157538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157564] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.157774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.157781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158252] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158357] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158468] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158574] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.666 [2024-11-20 09:07:45.158645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.666 [2024-11-20 09:07:45.158652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.667 [2024-11-20 09:07:45.158666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.667 [2024-11-20 09:07:45.158673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.667 [2024-11-20 09:07:45.158685] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.667 [2024-11-20 09:07:45.158692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:002d p:0 m:0 dnr:0
[... repeated nvme_qpair.c notice pairs elided (2024-11-20 09:07:45.158705-45.162073): each pair is an I/O command print followed by its failed completion. WRITE commands (lba 120816-121528, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 120512-120808, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all on sqid:1 with varying cid, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd advancing 002d-007f and wrapping to 0000-001f, p:0 m:0 dnr:0 throughout ...]
00:23:45.670 [2024-11-20 09:07:45.162080] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162193] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162298] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162622] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.670 [2024-11-20 09:07:45.162710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162731] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162842] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.670 [2024-11-20 09:07:45.162943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.670 [2024-11-20 09:07:45.162957] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.162969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.162976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.162989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.162996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163070] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163178] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163289] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163676] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163894] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.671 [2024-11-20 09:07:45.163915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.671 [2024-11-20 09:07:45.163972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.671 [2024-11-20 09:07:45.163979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.163991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.163998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.672 [2024-11-20 09:07:45.164057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164117] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164430] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164535] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164646] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164750] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.164783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.164790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:121032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:121040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:121048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165081] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:121056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:121064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:121072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:121080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.672 [2024-11-20 09:07:45.165146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.672 [2024-11-20 09:07:45.165159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:121088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:121096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165185] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:121104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:121112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:121120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:121128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:121136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165297] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:121144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:121152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:121160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:121168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:121176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:121184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165401] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:121192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:121200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:121208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:121224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165619] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:121232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:121240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:121248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:121256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:121264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:121272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165735] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:121280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:121288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:121296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:121304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:121312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165852] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:121320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:121328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:121336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.165916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:121344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.165924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:121352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.166435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:121360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.166457] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:121368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.166479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:121376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.166501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:121384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.673 [2024-11-20 09:07:45.166523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:23:45.673 [2024-11-20 09:07:45.166537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.674 [2024-11-20 09:07:45.166545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:121400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.674 [2024-11-20 09:07:45.166566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:121408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.674 [2024-11-20 09:07:45.166588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:120520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:120528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:120536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:120544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:120552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:120560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:120568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:120576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:120584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:120592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166861] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:120600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:120608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:120616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:120624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:120632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.166979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:120640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.166986] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:120648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:120656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:120664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:120672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:120680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167115] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:120688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:120696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:120704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:120712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:120720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:120728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167234] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:120736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:120744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:120752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:120760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.674 [2024-11-20 09:07:45.167326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:121416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.674 [2024-11-20 09:07:45.167394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167411] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:121424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.674 [2024-11-20 09:07:45.167418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.674 [2024-11-20 09:07:45.167435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:121432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:121440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:121448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:121456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:121464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:121472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:121480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:120768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:120776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:120784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167673] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:120792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:120800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:120808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:121488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:121496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:121504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167798] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:121512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:121520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:121528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:120512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.675 [2024-11-20 09:07:45.167894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:120816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.167982] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:120824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.167991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:120832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:120840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:120848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:120856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:120864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168116] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:120872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:120880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:120888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:120896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:120904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168259] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:120912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:120920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:120928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:120936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:120944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:120952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168438] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:120960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:120968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:120976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.675 [2024-11-20 09:07:45.168516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.675 [2024-11-20 09:07:45.168534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:120984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:45.168559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:120992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:45.168587] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:121000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:45.168614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:121008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:45.168639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:121016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:45.168665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:121024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:45.168672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.676 11241.23 IOPS, 43.91 MiB/s [2024-11-20T08:08:01.717Z] 10438.29 IOPS, 40.77 MiB/s [2024-11-20T08:08:01.717Z] 9742.40 IOPS, 38.06 MiB/s [2024-11-20T08:08:01.717Z] 9146.31 IOPS, 35.73 MiB/s [2024-11-20T08:08:01.717Z] 9262.18 IOPS, 36.18 MiB/s [2024-11-20T08:08:01.717Z] 9364.94 IOPS, 36.58 MiB/s [2024-11-20T08:08:01.717Z] 9517.05 IOPS, 37.18 MiB/s [2024-11-20T08:08:01.717Z] 9704.60 IOPS, 37.91 MiB/s [2024-11-20T08:08:01.717Z] 9878.81 IOPS, 38.59 MiB/s [2024-11-20T08:08:01.717Z] 9955.73 IOPS, 38.89 MiB/s [2024-11-20T08:08:01.717Z] 10005.30 IOPS, 39.08 MiB/s [2024-11-20T08:08:01.717Z] 10046.46 
IOPS, 39.24 MiB/s [2024-11-20T08:08:01.717Z] 10165.92 IOPS, 39.71 MiB/s [2024-11-20T08:08:01.717Z] 10279.73 IOPS, 40.16 MiB/s [2024-11-20T08:08:01.717Z] [2024-11-20 09:07:58.892535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:111720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:111736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:111752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:111768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:111784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892692] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:111800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:111816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:111832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:111848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:111864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:111880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892802] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:111896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:111912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:111928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:111944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:111960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:111976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:111992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.676 [2024-11-20 09:07:58.892941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:23:45.676 [2024-11-20 09:07:58.892959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:112008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.892969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.892981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:112024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.892988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:112040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:112056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893028] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:112072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:112088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:112104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:112120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:112136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893390] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:112144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:112160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:112176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:112192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:112208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:112224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:112240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:112256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:112272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:112288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:112304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893617] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:112320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:112336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:112352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:112368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:112384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:112400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893724] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:112416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:112432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:112448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:112464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:112480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893837] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:112496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:112512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:112528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:112544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:112560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:111648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.677 [2024-11-20 09:07:58.893941] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:112576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:23:45.677 [2024-11-20 09:07:58.893981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:112592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.677 [2024-11-20 09:07:58.893988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:112608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.678 [2024-11-20 09:07:58.894008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:111672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:112624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.678 [2024-11-20 09:07:58.894048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894367] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:112640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.678 [2024-11-20 09:07:58.894379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:111680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:111712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:111744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:111776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:111808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:111840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:111872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:111904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:111936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:111968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894593] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:112000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:112032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:112064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:112096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:112128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:111688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:45.678 [2024-11-20 09:07:58.894698] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:23:45.678 [2024-11-20 09:07:58.894712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:112664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:45.678 [2024-11-20 09:07:58.894719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:23:45.678 10360.93 IOPS, 40.47 MiB/s [2024-11-20T08:08:01.719Z] 10390.86 IOPS, 40.59 MiB/s [2024-11-20T08:08:01.719Z] 10415.38 IOPS, 40.69 MiB/s [2024-11-20T08:08:01.719Z] Received shutdown signal, test time was about 29.047556 seconds 00:23:45.678 00:23:45.678 Latency(us) 00:23:45.678 [2024-11-20T08:08:01.719Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:45.678 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:23:45.678 Verification LBA range: start 0x0 length 0x4000 00:23:45.678 Nvme0n1 : 29.05 10412.64 40.67 0.00 0.00 12272.18 416.72 3078254.41 00:23:45.678 [2024-11-20T08:08:01.719Z] =================================================================================================================== 00:23:45.678 [2024-11-20T08:08:01.719Z] Total : 10412.64 40.67 0.00 0.00 12272.18 416.72 3078254.41 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # 
nvmftestfini 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:45.678 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:45.678 rmmod nvme_tcp 00:23:45.678 rmmod nvme_fabrics 00:23:45.938 rmmod nvme_keyring 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 2437208 ']' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 2437208 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 2437208 ']' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 2437208 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2437208 00:23:45.938 09:08:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2437208' 00:23:45.938 killing process with pid 2437208 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 2437208 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 2437208 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:45.938 09:08:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:48.477 09:08:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:48.477 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:23:48.478 09:08:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-save 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-restore 00:23:48.478 00:23:48.478 real 0m41.034s 00:23:48.478 user 1m51.185s 00:23:48.478 sys 0m11.718s 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:48.478 09:08:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:23:48.478 ************************************ 00:23:48.478 END TEST nvmf_host_multipath_status 00:23:48.478 ************************************ 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.478 ************************************ 00:23:48.478 START TEST nvmf_discovery_remove_ifc 00:23:48.478 ************************************ 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:23:48.478 * Looking for test storage... 
00:23:48.478 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:23:48.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.478 --rc genhtml_branch_coverage=1 00:23:48.478 --rc genhtml_function_coverage=1 00:23:48.478 --rc genhtml_legend=1 00:23:48.478 --rc geninfo_all_blocks=1 00:23:48.478 --rc geninfo_unexecuted_blocks=1 00:23:48.478 00:23:48.478 ' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:48.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.478 --rc genhtml_branch_coverage=1 00:23:48.478 --rc genhtml_function_coverage=1 00:23:48.478 --rc genhtml_legend=1 00:23:48.478 --rc geninfo_all_blocks=1 00:23:48.478 --rc geninfo_unexecuted_blocks=1 00:23:48.478 00:23:48.478 ' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:48.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.478 --rc genhtml_branch_coverage=1 00:23:48.478 --rc genhtml_function_coverage=1 00:23:48.478 --rc genhtml_legend=1 00:23:48.478 --rc geninfo_all_blocks=1 00:23:48.478 --rc geninfo_unexecuted_blocks=1 00:23:48.478 00:23:48.478 ' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:48.478 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:48.478 --rc genhtml_branch_coverage=1 00:23:48.478 --rc genhtml_function_coverage=1 00:23:48.478 --rc genhtml_legend=1 00:23:48.478 --rc geninfo_all_blocks=1 00:23:48.478 --rc geninfo_unexecuted_blocks=1 00:23:48.478 00:23:48.478 ' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:23:48.478 09:08:04 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:48.478 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:48.479 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
nvmftestinit 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:23:48.479 09:08:04 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:55.051 09:08:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:55.051 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:55.051 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp 
== tcp ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:55.051 Found net devices under 0000:86:00.0: cvl_0_0 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:55.051 Found net devices under 0000:86:00.1: cvl_0_1 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # 
net_devs+=("${pci_net_devs[@]}") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.051 09:08:09 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:55.051 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:55.052 09:08:09 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:55.052 09:08:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:55.052 10.0.0.1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:55.052 10.0.0.2 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth 
]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:55.052 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:55.052 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:23:55.052 00:23:55.052 --- 10.0.0.1 ping statistics --- 00:23:55.052 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.052 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.052 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:55.053 09:08:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:55.053 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:23:55.053 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:23:55.053 00:23:55.053 --- 10.0.0.2 ping statistics --- 00:23:55.053 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:55.053 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local 
dev=initiator0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:55.053 
09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:55.053 09:08:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:55.053 09:08:10 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=2446245 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 2446245 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2446245 ']' 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:55.053 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:55.053 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 [2024-11-20 09:08:10.404473] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:23:55.054 [2024-11-20 09:08:10.404529] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:55.054 [2024-11-20 09:08:10.483075] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.054 [2024-11-20 09:08:10.524938] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:55.054 [2024-11-20 09:08:10.524979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:23:55.054 [2024-11-20 09:08:10.524987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:55.054 [2024-11-20 09:08:10.524993] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:55.054 [2024-11-20 09:08:10.524998] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:55.054 [2024-11-20 09:08:10.525582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 [2024-11-20 09:08:10.669710] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:55.054 [2024-11-20 09:08:10.677902] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:55.054 null0 00:23:55.054 [2024-11-20 09:08:10.709873] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=2446268 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 2446268 /tmp/host.sock 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2446268 ']' 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:55.054 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 [2024-11-20 09:08:10.780480] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:23:55.054 [2024-11-20 09:08:10.780523] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2446268 ] 00:23:55.054 [2024-11-20 09:08:10.855639] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:55.054 [2024-11-20 09:08:10.898951] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.054 09:08:10 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:55.054 09:08:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:55.054 09:08:11 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:23:55.054 09:08:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:55.054 09:08:11 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.434 [2024-11-20 09:08:12.076449] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:56.434 [2024-11-20 09:08:12.076469] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:56.434 [2024-11-20 09:08:12.076483] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:56.434 [2024-11-20 09:08:12.203880] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:56.434 [2024-11-20 09:08:12.306654] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:23:56.434 [2024-11-20 09:08:12.307351] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x14469f0:1 started. 
00:23:56.434 [2024-11-20 09:08:12.308690] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:23:56.434 [2024-11-20 09:08:12.308730] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:23:56.434 [2024-11-20 09:08:12.308749] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:23:56.434 [2024-11-20 09:08:12.308761] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:56.434 [2024-11-20 09:08:12.308777] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:56.434 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.434 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:23:56.434 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:23:56.434 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.434 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:23:56.435 [2024-11-20 09:08:12.315248] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x14469f0 was disconnected and freed. delete nvme_qpair. 
00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:56.435 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@24 -- # xargs 00:23:56.693 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:56.693 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:23:56.693 09:08:12 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:23:57.629 09:08:13 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r 
'.[].name' 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:23:58.565 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:58.824 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:23:58.824 09:08:14 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:23:59.757 09:08:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sleep 1 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:24:00.689 09:08:16 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 
00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:24:02.065 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:02.065 [2024-11-20 09:08:17.750429] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:02.065 [2024-11-20 09:08:17.750467] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.065 [2024-11-20 09:08:17.750478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.065 [2024-11-20 09:08:17.750487] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.065 [2024-11-20 09:08:17.750494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.065 [2024-11-20 09:08:17.750501] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.065 [2024-11-20 09:08:17.750508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.066 [2024-11-20 09:08:17.750515] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:02.066 [2024-11-20 09:08:17.750522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.066 [2024-11-20 09:08:17.750529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 
cdw11:00000000 00:24:02.066 [2024-11-20 09:08:17.750536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:02.066 [2024-11-20 09:08:17.750542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1423220 is same with the state(6) to be set 00:24:02.066 [2024-11-20 09:08:17.760452] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423220 (9): Bad file descriptor 00:24:02.066 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:24:02.066 09:08:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:24:02.066 [2024-11-20 09:08:17.770485] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:24:02.066 [2024-11-20 09:08:17.770500] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:24:02.066 [2024-11-20 09:08:17.770504] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:02.066 [2024-11-20 09:08:17.770512] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:02.066 [2024-11-20 09:08:17.770534] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:24:03.001 [2024-11-20 09:08:18.800044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:03.001 [2024-11-20 09:08:18.800126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1423220 with addr=10.0.0.2, port=4420 00:24:03.001 [2024-11-20 09:08:18.800159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1423220 is same with the state(6) to be set 00:24:03.001 [2024-11-20 09:08:18.800212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1423220 (9): Bad file descriptor 00:24:03.001 [2024-11-20 09:08:18.801168] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:24:03.001 [2024-11-20 09:08:18.801232] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:24:03.001 [2024-11-20 09:08:18.801255] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:24:03.001 [2024-11-20 09:08:18.801277] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:24:03.001 [2024-11-20 09:08:18.801297] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:24:03.001 [2024-11-20 09:08:18.801313] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:24:03.001 [2024-11-20 09:08:18.801326] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:24:03.001 [2024-11-20 09:08:18.801348] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:24:03.001 [2024-11-20 09:08:18.801362] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:24:03.001 09:08:18 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:24:03.931 [2024-11-20 09:08:19.803883] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:24:03.931 [2024-11-20 09:08:19.803903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:24:03.931 [2024-11-20 09:08:19.803914] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state
00:24:03.931 [2024-11-20 09:08:19.803921] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed
00:24:03.931 [2024-11-20 09:08:19.803932] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state
00:24:03.931 [2024-11-20 09:08:19.803938] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected.
00:24:03.931 [2024-11-20 09:08:19.803942] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets.
00:24:03.931 [2024-11-20 09:08:19.803951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed.
00:24:03.931 [2024-11-20 09:08:19.803971] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420
00:24:03.931 [2024-11-20 09:08:19.803990] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.931 [2024-11-20 09:08:19.803998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.931 [2024-11-20 09:08:19.804007] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.931 [2024-11-20 09:08:19.804014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.931 [2024-11-20 09:08:19.804021] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.931 [2024-11-20 09:08:19.804028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.931 [2024-11-20 09:08:19.804035] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.931 [2024-11-20 09:08:19.804041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.931 [2024-11-20 09:08:19.804049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000
00:24:03.931 [2024-11-20 09:08:19.804055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:24:03.931 [2024-11-20 09:08:19.804062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state.
00:24:03.931 [2024-11-20 09:08:19.804485] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1412900 (9): Bad file descriptor
00:24:03.931 [2024-11-20 09:08:19.805496] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command
00:24:03.931 [2024-11-20 09:08:19.805507] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name'
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]]
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name'
00:24:03.931 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:03.932 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort
00:24:03.932 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:03.932 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs
00:24:03.932 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:04.189 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:24:04.189 09:08:19 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name'
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]]
00:24:05.122 09:08:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1
00:24:06.057 [2024-11-20 09:08:21.821165] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:24:06.057 [2024-11-20 09:08:21.821183] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:24:06.057 [2024-11-20 09:08:21.821194] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:24:06.057 [2024-11-20 09:08:21.907458] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1
00:24:06.057 [2024-11-20 09:08:22.003120] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420
00:24:06.057 [2024-11-20 09:08:22.003752] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x14557c0:1 started.
00:24:06.057 [2024-11-20 09:08:22.004811] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:24:06.057 [2024-11-20 09:08:22.004845] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:24:06.057 [2024-11-20 09:08:22.004861] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:24:06.057 [2024-11-20 09:08:22.004874] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:24:06.057 [2024-11-20 09:08:22.004881] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:24:06.057 [2024-11-20 09:08:22.010226] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x14557c0 was disconnected and freed. delete nvme_qpair.
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name'
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs
00:24:06.057 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2446268 ']'
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2446268'
00:24:06.315 killing process with pid 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2446268
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20}
00:24:06.315 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:24:06.315 rmmod nvme_tcp
00:24:06.315 rmmod nvme_fabrics
00:24:06.315 rmmod nvme_keyring
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 2446245 ']'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2446245 ']'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2446245'
00:24:06.574 killing process with pid 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2446245
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:24:06.574 09:08:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=()
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore
00:24:09.108
00:24:09.108 real 0m20.601s
00:24:09.108 user 0m24.781s
00:24:09.108 sys 0m5.889s
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:24:09.108 ************************************
00:24:09.108 END TEST nvmf_discovery_remove_ifc
00:24:09.108 ************************************
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:24:09.108 ************************************
00:24:09.108 START TEST nvmf_identify_kernel_target
00:24:09.108 ************************************
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp
00:24:09.108 * Looking for test storage...
00:24:09.108 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-:
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-:
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@338 -- # local 'op=<'
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 ))
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:24:09.108 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:24:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:09.109 --rc genhtml_branch_coverage=1
00:24:09.109 --rc genhtml_function_coverage=1
00:24:09.109 --rc genhtml_legend=1
00:24:09.109 --rc geninfo_all_blocks=1
00:24:09.109 --rc geninfo_unexecuted_blocks=1
00:24:09.109
00:24:09.109 '
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:24:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:09.109 --rc genhtml_branch_coverage=1
00:24:09.109 --rc genhtml_function_coverage=1
00:24:09.109 --rc genhtml_legend=1
00:24:09.109 --rc geninfo_all_blocks=1
00:24:09.109 --rc geninfo_unexecuted_blocks=1
00:24:09.109
00:24:09.109 '
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:24:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:09.109 --rc genhtml_branch_coverage=1
00:24:09.109 --rc genhtml_function_coverage=1
00:24:09.109 --rc genhtml_legend=1
00:24:09.109 --rc geninfo_all_blocks=1
00:24:09.109 --rc geninfo_unexecuted_blocks=1
00:24:09.109
00:24:09.109 '
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:24:09.109 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:24:09.109 --rc genhtml_branch_coverage=1
00:24:09.109 --rc genhtml_function_coverage=1
00:24:09.109 --rc genhtml_legend=1
00:24:09.109 --rc geninfo_all_blocks=1
00:24:09.109 --rc geninfo_unexecuted_blocks=1
00:24:09.109
00:24:09.109 '
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:09.109 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:24:09.109 09:08:24 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
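The `[: : integer expression expected` diagnostic above comes from `common.sh` line 31 testing an empty string with `-eq` (`'[' '' -eq 1 ']'`). A minimal standalone reproduction (the variable name `flag` is illustrative, not from the SPDK source):

```shell
# Reproduces the "[: : integer expression expected" diagnostic seen above:
# `[ ... -eq ... ]` requires integer operands, so an empty string makes the
# test error out (status 2) and fall through to the else branch.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
  echo "flag is 1"
else
  echo "flag empty or non-numeric"
fi

# A guard avoids the diagnostic entirely:
if [ -n "$flag" ] && [ "$flag" -eq 1 ]; then
  echo "flag is 1"
fi
```

In the log the script simply continues: the non-zero test status selects the false branch, and only the diagnostic leaks to stderr.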
common/autotest_common.sh@10 -- # set +x 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:15.675 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:15.676 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:15.676 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 
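The `Found 0000:86:00.x (0x8086 - 0x159b)` lines above come from matching PCI functions by vendor/device ID read from sysfs (0x8086:0x159b is an Intel E810 port, bound to the `ice` driver in this log). A hypothetical helper mirroring that scan; the base-directory parameter is ours, added so the logic can be exercised against a fake tree, and the function is not the SPDK implementation:

```shell
# Scan a sysfs-style PCI device tree and report functions whose vendor and
# device IDs match the Intel E810 (0x8086:0x159b), as the trace above does.
find_e810() {
  local base=${1:-/sys/bus/pci/devices} pci vendor device
  for pci in "$base"/*; do
    [ -e "$pci/vendor" ] || continue
    vendor=$(cat "$pci/vendor")
    device=$(cat "$pci/device")
    if [ "$vendor" = "0x8086" ] && [ "$device" = "0x159b" ]; then
      echo "Found ${pci##*/} ($vendor - $device)"
    fi
  done
}

find_e810   # on the WFP8 node above this reported 0000:86:00.0 and 0000:86:00.1
```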
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:15.676 Found net devices under 0000:86:00.0: cvl_0_0 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:15.676 Found net devices under 0000:86:00.1: cvl_0_1 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # create_target_ns 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@145 -- # ip netns add 
nvmf_ns_spdk 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:15.676 09:08:30 
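The `set_up`/`set_ip` helpers above run each `ip` command either directly or behind an `ip netns exec nvmf_ns_spdk` prefix, choosing via a bash nameref (`local -n ns=NVMF_TARGET_NS_CMD`). A sketch of that dispatch pattern; since `ip netns exec` needs root, a harmless `env` prefix stands in, and `run_in` is an illustrative name rather than the SPDK helper:

```shell
# Nameref dispatch as used by set_up/set_ip above: when a variable name is
# passed, `local -n` binds `ns` to that array and the command runs behind its
# prefix (in the log: `ip netns exec nvmf_ns_spdk`); otherwise it runs as-is.
NVMF_TARGET_NS_CMD=(env DEMO=1)   # stand-in for the root-only netns prefix

run_in() {
  local in_ns=$1; shift
  if [ -n "$in_ns" ]; then
    local -n ns=$in_ns
    "${ns[@]}" "$@"
  else
    "$@"
  fi
}

run_in NVMF_TARGET_NS_CMD sh -c 'echo "DEMO=$DEMO"'   # DEMO=1
run_in "" echo "default namespace"
```

Namerefs (bash 4.3+) let one helper serve both the host side and the target namespace without duplicating every command.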
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:15.676 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns 
nvmf_ns_spdk 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:15.677 10.0.0.1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:15.677 10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
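The `val_to_ip` calls above turn the pooled integers into dotted-quad addresses (167772161 = 0x0a000001 → 10.0.0.1, 167772162 → 10.0.0.2). A reconstruction consistent with the `printf '%u.%u.%u.%u'` calls in the trace; the byte-shifting body is inferred, not copied from the SPDK source:

```shell
# Integer-to-IPv4 conversion matching the trace above: extract each octet
# with shifts and masks, then format as a dotted quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets `setup_interfaces` advance it with plain arithmetic (`ip_pool += 2` per interface pair) before converting.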
NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
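The `ipts` call above expands to a plain `iptables` invocation with an `SPDK_NVMF:<args>` comment appended, so teardown can later find and delete exactly the rules the test inserted. An echo-only sketch of such a wrapper, based on the expansion visible in the log (the real helper executes the rule, which requires root):

```shell
# Echo-only sketch of the ipts wrapper: tag the rule with a comment recording
# the original arguments. Printing instead of executing lets the expansion be
# inspected without root.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do `iptables-save | grep SPDK_NVMF` and replay each match with `-D`, removing only the test's own rules.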
nvmf/setup.sh@38 -- # ping_ips 1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:15.677 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:15.677 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:24:15.677 00:24:15.677 --- 10.0.0.1 ping statistics --- 00:24:15.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.677 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:15.677 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:15.677 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:15.677 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:24:15.677 00:24:15.677 --- 10.0.0.2 ping statistics --- 00:24:15.677 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:15.677 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:15.678 09:08:30 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:24:15.678 
09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:15.678 09:08:30 
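The address-resolution portion of the trace above boils down to one pattern: setup.sh stores each test IP in the interface's `ifalias` file and reads it back, running `cat` inside the target's network namespace when one is configured. A minimal sketch of that lookup (the helper name and the `SYSFS_NET` override are illustrative, not setup.sh's actual API):

```shell
# Sketch of the ifalias-based IP lookup seen in the trace.
# SYSFS_NET can be overridden for testing; defaults to the real sysfs path.
get_ip_from_ifalias() {
    local dev=$1 netns=${2:-} ip sysfs=${SYSFS_NET:-/sys/class/net}
    if [[ -n $netns ]]; then
        # Target-side devices live inside a namespace (nvmf_ns_spdk in the log)
        ip=$(ip netns exec "$netns" cat "$sysfs/$dev/ifalias")
    else
        ip=$(cat "$sysfs/$dev/ifalias")
    fi
    # Return non-zero when no alias is set, mirroring setup.sh's "return 1" path
    [[ -n $ip ]] && echo "$ip"
}
```

In the trace this resolves `cvl_0_0` to 10.0.0.1 on the initiator side and `cvl_0_1` to 10.0.0.2 inside `nvmf_ns_spdk`; the second initiator/target lookups fail and leave `NVMF_SECOND_*_IP` empty.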
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:24:15.678 09:08:30 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:24:15.678 09:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:15.678 09:08:31 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:18.251 Waiting for block devices as requested 00:24:18.251 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:18.251 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:18.251 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:18.251 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:18.251 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:18.251 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:18.546 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:18.546 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:18.546 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:18.546 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:18.546 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:18.827 0000:80:04.5 (8086 2021): 
vfio-pci -> ioatdma 00:24:18.827 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:18.827 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:19.086 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:19.086 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:19.086 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:19.086 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:19.087 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:19.346 No valid GPT data, bailing 00:24:19.346 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:19.346 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:19.346 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:19.346 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # 
nvme=/dev/nvme0n1 00:24:19.346 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:19.347 00:24:19.347 
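The `configure_kernel_target` steps traced above (a series of `mkdir` and `echo` calls under `/sys/kernel/config/nvmet`, then a symlink to expose the subsystem on the port) can be condensed into one sketch. The function name and its overridable root argument are for illustration only; on a live system this needs root with `nvmet` and `nvmet-tcp` loaded, and the attribute filenames are the standard kernel nvmet configfs ones:

```shell
# Condensed sketch of the configfs target setup from the trace.
# Args: NQN, listen IP, backing block device, configfs root (overridable
# so the sketch can be exercised against a scratch directory).
configure_kernel_target_sketch() {
    local nqn=$1 ip=$2 blkdev=$3 nvmet=${4:-/sys/kernel/config/nvmet}
    local subsys=$nvmet/subsystems/$nqn port=$nvmet/ports/1

    mkdir -p "$subsys/namespaces/1" "$port/subsystems"

    echo 1 > "$subsys/attr_allow_any_host"          # trace echoes "1" here
    echo "$blkdev" > "$subsys/namespaces/1/device_path"
    echo 1 > "$subsys/namespaces/1/enable"

    echo "$ip" > "$port/addr_traddr"                # 10.0.0.1 in the trace
    echo tcp   > "$port/addr_trtype"
    echo 4420  > "$port/addr_trsvcid"
    echo ipv4  > "$port/addr_adrfam"

    # Expose the subsystem on the port, as done with ln -s in the trace
    ln -s "$subsys" "$port/subsystems/"
}
```

After this, the `nvme discover ... -a 10.0.0.1 -t tcp -s 4420` call in the trace returns two records: the discovery subsystem and `nqn.2016-06.io.spdk:testnqn`.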
Discovery Log Number of Records 2, Generation counter 2 00:24:19.347 =====Discovery Log Entry 0====== 00:24:19.347 trtype: tcp 00:24:19.347 adrfam: ipv4 00:24:19.347 subtype: current discovery subsystem 00:24:19.347 treq: not specified, sq flow control disable supported 00:24:19.347 portid: 1 00:24:19.347 trsvcid: 4420 00:24:19.347 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:19.347 traddr: 10.0.0.1 00:24:19.347 eflags: none 00:24:19.347 sectype: none 00:24:19.347 =====Discovery Log Entry 1====== 00:24:19.347 trtype: tcp 00:24:19.347 adrfam: ipv4 00:24:19.347 subtype: nvme subsystem 00:24:19.347 treq: not specified, sq flow control disable supported 00:24:19.347 portid: 1 00:24:19.347 trsvcid: 4420 00:24:19.347 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:19.347 traddr: 10.0.0.1 00:24:19.347 eflags: none 00:24:19.347 sectype: none 00:24:19.347 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:19.347 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:19.347 ===================================================== 00:24:19.347 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:19.347 ===================================================== 00:24:19.347 Controller Capabilities/Features 00:24:19.347 ================================ 00:24:19.347 Vendor ID: 0000 00:24:19.347 Subsystem Vendor ID: 0000 00:24:19.347 Serial Number: 0fb6b4dabad3202025c5 00:24:19.347 Model Number: Linux 00:24:19.347 Firmware Version: 6.8.9-20 00:24:19.347 Recommended Arb Burst: 0 00:24:19.347 IEEE OUI Identifier: 00 00 00 00:24:19.347 Multi-path I/O 00:24:19.347 May have multiple subsystem ports: No 00:24:19.347 May have multiple controllers: No 00:24:19.347 Associated with SR-IOV VF: No 00:24:19.347 Max Data Transfer Size: Unlimited 00:24:19.347 Max Number of Namespaces: 0 
00:24:19.347 Max Number of I/O Queues: 1024 00:24:19.347 NVMe Specification Version (VS): 1.3 00:24:19.347 NVMe Specification Version (Identify): 1.3 00:24:19.347 Maximum Queue Entries: 1024 00:24:19.347 Contiguous Queues Required: No 00:24:19.347 Arbitration Mechanisms Supported 00:24:19.347 Weighted Round Robin: Not Supported 00:24:19.347 Vendor Specific: Not Supported 00:24:19.347 Reset Timeout: 7500 ms 00:24:19.347 Doorbell Stride: 4 bytes 00:24:19.347 NVM Subsystem Reset: Not Supported 00:24:19.347 Command Sets Supported 00:24:19.347 NVM Command Set: Supported 00:24:19.347 Boot Partition: Not Supported 00:24:19.347 Memory Page Size Minimum: 4096 bytes 00:24:19.347 Memory Page Size Maximum: 4096 bytes 00:24:19.347 Persistent Memory Region: Not Supported 00:24:19.347 Optional Asynchronous Events Supported 00:24:19.347 Namespace Attribute Notices: Not Supported 00:24:19.347 Firmware Activation Notices: Not Supported 00:24:19.347 ANA Change Notices: Not Supported 00:24:19.347 PLE Aggregate Log Change Notices: Not Supported 00:24:19.347 LBA Status Info Alert Notices: Not Supported 00:24:19.347 EGE Aggregate Log Change Notices: Not Supported 00:24:19.347 Normal NVM Subsystem Shutdown event: Not Supported 00:24:19.347 Zone Descriptor Change Notices: Not Supported 00:24:19.347 Discovery Log Change Notices: Supported 00:24:19.347 Controller Attributes 00:24:19.347 128-bit Host Identifier: Not Supported 00:24:19.347 Non-Operational Permissive Mode: Not Supported 00:24:19.347 NVM Sets: Not Supported 00:24:19.347 Read Recovery Levels: Not Supported 00:24:19.347 Endurance Groups: Not Supported 00:24:19.347 Predictable Latency Mode: Not Supported 00:24:19.347 Traffic Based Keep ALive: Not Supported 00:24:19.347 Namespace Granularity: Not Supported 00:24:19.347 SQ Associations: Not Supported 00:24:19.347 UUID List: Not Supported 00:24:19.347 Multi-Domain Subsystem: Not Supported 00:24:19.347 Fixed Capacity Management: Not Supported 00:24:19.347 Variable Capacity Management: 
Not Supported 00:24:19.347 Delete Endurance Group: Not Supported 00:24:19.347 Delete NVM Set: Not Supported 00:24:19.347 Extended LBA Formats Supported: Not Supported 00:24:19.347 Flexible Data Placement Supported: Not Supported 00:24:19.347 00:24:19.347 Controller Memory Buffer Support 00:24:19.347 ================================ 00:24:19.347 Supported: No 00:24:19.347 00:24:19.347 Persistent Memory Region Support 00:24:19.347 ================================ 00:24:19.347 Supported: No 00:24:19.347 00:24:19.347 Admin Command Set Attributes 00:24:19.347 ============================ 00:24:19.347 Security Send/Receive: Not Supported 00:24:19.347 Format NVM: Not Supported 00:24:19.347 Firmware Activate/Download: Not Supported 00:24:19.347 Namespace Management: Not Supported 00:24:19.347 Device Self-Test: Not Supported 00:24:19.347 Directives: Not Supported 00:24:19.347 NVMe-MI: Not Supported 00:24:19.347 Virtualization Management: Not Supported 00:24:19.347 Doorbell Buffer Config: Not Supported 00:24:19.347 Get LBA Status Capability: Not Supported 00:24:19.347 Command & Feature Lockdown Capability: Not Supported 00:24:19.347 Abort Command Limit: 1 00:24:19.347 Async Event Request Limit: 1 00:24:19.347 Number of Firmware Slots: N/A 00:24:19.347 Firmware Slot 1 Read-Only: N/A 00:24:19.607 Firmware Activation Without Reset: N/A 00:24:19.607 Multiple Update Detection Support: N/A 00:24:19.607 Firmware Update Granularity: No Information Provided 00:24:19.607 Per-Namespace SMART Log: No 00:24:19.607 Asymmetric Namespace Access Log Page: Not Supported 00:24:19.608 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:19.608 Command Effects Log Page: Not Supported 00:24:19.608 Get Log Page Extended Data: Supported 00:24:19.608 Telemetry Log Pages: Not Supported 00:24:19.608 Persistent Event Log Pages: Not Supported 00:24:19.608 Supported Log Pages Log Page: May Support 00:24:19.608 Commands Supported & Effects Log Page: Not Supported 00:24:19.608 Feature Identifiers & 
Effects Log Page:May Support 00:24:19.608 NVMe-MI Commands & Effects Log Page: May Support 00:24:19.608 Data Area 4 for Telemetry Log: Not Supported 00:24:19.608 Error Log Page Entries Supported: 1 00:24:19.608 Keep Alive: Not Supported 00:24:19.608 00:24:19.608 NVM Command Set Attributes 00:24:19.608 ========================== 00:24:19.608 Submission Queue Entry Size 00:24:19.608 Max: 1 00:24:19.608 Min: 1 00:24:19.608 Completion Queue Entry Size 00:24:19.608 Max: 1 00:24:19.608 Min: 1 00:24:19.608 Number of Namespaces: 0 00:24:19.608 Compare Command: Not Supported 00:24:19.608 Write Uncorrectable Command: Not Supported 00:24:19.608 Dataset Management Command: Not Supported 00:24:19.608 Write Zeroes Command: Not Supported 00:24:19.608 Set Features Save Field: Not Supported 00:24:19.608 Reservations: Not Supported 00:24:19.608 Timestamp: Not Supported 00:24:19.608 Copy: Not Supported 00:24:19.608 Volatile Write Cache: Not Present 00:24:19.608 Atomic Write Unit (Normal): 1 00:24:19.608 Atomic Write Unit (PFail): 1 00:24:19.608 Atomic Compare & Write Unit: 1 00:24:19.608 Fused Compare & Write: Not Supported 00:24:19.608 Scatter-Gather List 00:24:19.608 SGL Command Set: Supported 00:24:19.608 SGL Keyed: Not Supported 00:24:19.608 SGL Bit Bucket Descriptor: Not Supported 00:24:19.608 SGL Metadata Pointer: Not Supported 00:24:19.608 Oversized SGL: Not Supported 00:24:19.608 SGL Metadata Address: Not Supported 00:24:19.608 SGL Offset: Supported 00:24:19.608 Transport SGL Data Block: Not Supported 00:24:19.608 Replay Protected Memory Block: Not Supported 00:24:19.608 00:24:19.608 Firmware Slot Information 00:24:19.608 ========================= 00:24:19.608 Active slot: 0 00:24:19.608 00:24:19.608 00:24:19.608 Error Log 00:24:19.608 ========= 00:24:19.608 00:24:19.608 Active Namespaces 00:24:19.608 ================= 00:24:19.608 Discovery Log Page 00:24:19.608 ================== 00:24:19.608 Generation Counter: 2 00:24:19.608 Number of Records: 2 00:24:19.608 Record 
Format: 0 00:24:19.608 00:24:19.608 Discovery Log Entry 0 00:24:19.608 ---------------------- 00:24:19.608 Transport Type: 3 (TCP) 00:24:19.608 Address Family: 1 (IPv4) 00:24:19.608 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:19.608 Entry Flags: 00:24:19.608 Duplicate Returned Information: 0 00:24:19.608 Explicit Persistent Connection Support for Discovery: 0 00:24:19.608 Transport Requirements: 00:24:19.608 Secure Channel: Not Specified 00:24:19.608 Port ID: 1 (0x0001) 00:24:19.608 Controller ID: 65535 (0xffff) 00:24:19.608 Admin Max SQ Size: 32 00:24:19.608 Transport Service Identifier: 4420 00:24:19.608 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:19.608 Transport Address: 10.0.0.1 00:24:19.608 Discovery Log Entry 1 00:24:19.608 ---------------------- 00:24:19.608 Transport Type: 3 (TCP) 00:24:19.608 Address Family: 1 (IPv4) 00:24:19.608 Subsystem Type: 2 (NVM Subsystem) 00:24:19.608 Entry Flags: 00:24:19.608 Duplicate Returned Information: 0 00:24:19.608 Explicit Persistent Connection Support for Discovery: 0 00:24:19.608 Transport Requirements: 00:24:19.608 Secure Channel: Not Specified 00:24:19.608 Port ID: 1 (0x0001) 00:24:19.608 Controller ID: 65535 (0xffff) 00:24:19.608 Admin Max SQ Size: 32 00:24:19.608 Transport Service Identifier: 4420 00:24:19.608 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:19.608 Transport Address: 10.0.0.1 00:24:19.608 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:19.608 get_feature(0x01) failed 00:24:19.608 get_feature(0x02) failed 00:24:19.608 get_feature(0x04) failed 00:24:19.608 ===================================================== 00:24:19.608 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:19.608 
===================================================== 00:24:19.608 Controller Capabilities/Features 00:24:19.608 ================================ 00:24:19.608 Vendor ID: 0000 00:24:19.608 Subsystem Vendor ID: 0000 00:24:19.608 Serial Number: 0c9532525e3299dc8280 00:24:19.608 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:19.608 Firmware Version: 6.8.9-20 00:24:19.608 Recommended Arb Burst: 6 00:24:19.608 IEEE OUI Identifier: 00 00 00 00:24:19.608 Multi-path I/O 00:24:19.608 May have multiple subsystem ports: Yes 00:24:19.608 May have multiple controllers: Yes 00:24:19.608 Associated with SR-IOV VF: No 00:24:19.608 Max Data Transfer Size: Unlimited 00:24:19.608 Max Number of Namespaces: 1024 00:24:19.608 Max Number of I/O Queues: 128 00:24:19.608 NVMe Specification Version (VS): 1.3 00:24:19.608 NVMe Specification Version (Identify): 1.3 00:24:19.608 Maximum Queue Entries: 1024 00:24:19.608 Contiguous Queues Required: No 00:24:19.608 Arbitration Mechanisms Supported 00:24:19.608 Weighted Round Robin: Not Supported 00:24:19.608 Vendor Specific: Not Supported 00:24:19.608 Reset Timeout: 7500 ms 00:24:19.608 Doorbell Stride: 4 bytes 00:24:19.608 NVM Subsystem Reset: Not Supported 00:24:19.608 Command Sets Supported 00:24:19.608 NVM Command Set: Supported 00:24:19.608 Boot Partition: Not Supported 00:24:19.608 Memory Page Size Minimum: 4096 bytes 00:24:19.608 Memory Page Size Maximum: 4096 bytes 00:24:19.608 Persistent Memory Region: Not Supported 00:24:19.608 Optional Asynchronous Events Supported 00:24:19.608 Namespace Attribute Notices: Supported 00:24:19.608 Firmware Activation Notices: Not Supported 00:24:19.608 ANA Change Notices: Supported 00:24:19.608 PLE Aggregate Log Change Notices: Not Supported 00:24:19.608 LBA Status Info Alert Notices: Not Supported 00:24:19.608 EGE Aggregate Log Change Notices: Not Supported 00:24:19.608 Normal NVM Subsystem Shutdown event: Not Supported 00:24:19.608 Zone Descriptor Change Notices: Not Supported 00:24:19.608 
Discovery Log Change Notices: Not Supported 00:24:19.608 Controller Attributes 00:24:19.608 128-bit Host Identifier: Supported 00:24:19.608 Non-Operational Permissive Mode: Not Supported 00:24:19.608 NVM Sets: Not Supported 00:24:19.608 Read Recovery Levels: Not Supported 00:24:19.608 Endurance Groups: Not Supported 00:24:19.608 Predictable Latency Mode: Not Supported 00:24:19.608 Traffic Based Keep ALive: Supported 00:24:19.608 Namespace Granularity: Not Supported 00:24:19.608 SQ Associations: Not Supported 00:24:19.608 UUID List: Not Supported 00:24:19.608 Multi-Domain Subsystem: Not Supported 00:24:19.608 Fixed Capacity Management: Not Supported 00:24:19.608 Variable Capacity Management: Not Supported 00:24:19.608 Delete Endurance Group: Not Supported 00:24:19.608 Delete NVM Set: Not Supported 00:24:19.608 Extended LBA Formats Supported: Not Supported 00:24:19.608 Flexible Data Placement Supported: Not Supported 00:24:19.608 00:24:19.608 Controller Memory Buffer Support 00:24:19.608 ================================ 00:24:19.608 Supported: No 00:24:19.608 00:24:19.608 Persistent Memory Region Support 00:24:19.608 ================================ 00:24:19.608 Supported: No 00:24:19.608 00:24:19.608 Admin Command Set Attributes 00:24:19.608 ============================ 00:24:19.608 Security Send/Receive: Not Supported 00:24:19.608 Format NVM: Not Supported 00:24:19.608 Firmware Activate/Download: Not Supported 00:24:19.608 Namespace Management: Not Supported 00:24:19.608 Device Self-Test: Not Supported 00:24:19.608 Directives: Not Supported 00:24:19.608 NVMe-MI: Not Supported 00:24:19.608 Virtualization Management: Not Supported 00:24:19.608 Doorbell Buffer Config: Not Supported 00:24:19.608 Get LBA Status Capability: Not Supported 00:24:19.608 Command & Feature Lockdown Capability: Not Supported 00:24:19.608 Abort Command Limit: 4 00:24:19.608 Async Event Request Limit: 4 00:24:19.608 Number of Firmware Slots: N/A 00:24:19.608 Firmware Slot 1 Read-Only: N/A 
00:24:19.608 Firmware Activation Without Reset: N/A 00:24:19.608 Multiple Update Detection Support: N/A 00:24:19.609 Firmware Update Granularity: No Information Provided 00:24:19.609 Per-Namespace SMART Log: Yes 00:24:19.609 Asymmetric Namespace Access Log Page: Supported 00:24:19.609 ANA Transition Time : 10 sec 00:24:19.609 00:24:19.609 Asymmetric Namespace Access Capabilities 00:24:19.609 ANA Optimized State : Supported 00:24:19.609 ANA Non-Optimized State : Supported 00:24:19.609 ANA Inaccessible State : Supported 00:24:19.609 ANA Persistent Loss State : Supported 00:24:19.609 ANA Change State : Supported 00:24:19.609 ANAGRPID is not changed : No 00:24:19.609 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:19.609 00:24:19.609 ANA Group Identifier Maximum : 128 00:24:19.609 Number of ANA Group Identifiers : 128 00:24:19.609 Max Number of Allowed Namespaces : 1024 00:24:19.609 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:19.609 Command Effects Log Page: Supported 00:24:19.609 Get Log Page Extended Data: Supported 00:24:19.609 Telemetry Log Pages: Not Supported 00:24:19.609 Persistent Event Log Pages: Not Supported 00:24:19.609 Supported Log Pages Log Page: May Support 00:24:19.609 Commands Supported & Effects Log Page: Not Supported 00:24:19.609 Feature Identifiers & Effects Log Page:May Support 00:24:19.609 NVMe-MI Commands & Effects Log Page: May Support 00:24:19.609 Data Area 4 for Telemetry Log: Not Supported 00:24:19.609 Error Log Page Entries Supported: 128 00:24:19.609 Keep Alive: Supported 00:24:19.609 Keep Alive Granularity: 1000 ms 00:24:19.609 00:24:19.609 NVM Command Set Attributes 00:24:19.609 ========================== 00:24:19.609 Submission Queue Entry Size 00:24:19.609 Max: 64 00:24:19.609 Min: 64 00:24:19.609 Completion Queue Entry Size 00:24:19.609 Max: 16 00:24:19.609 Min: 16 00:24:19.609 Number of Namespaces: 1024 00:24:19.609 Compare Command: Not Supported 00:24:19.609 Write Uncorrectable Command: Not Supported 00:24:19.609 
Dataset Management Command: Supported 00:24:19.609 Write Zeroes Command: Supported 00:24:19.609 Set Features Save Field: Not Supported 00:24:19.609 Reservations: Not Supported 00:24:19.609 Timestamp: Not Supported 00:24:19.609 Copy: Not Supported 00:24:19.609 Volatile Write Cache: Present 00:24:19.609 Atomic Write Unit (Normal): 1 00:24:19.609 Atomic Write Unit (PFail): 1 00:24:19.609 Atomic Compare & Write Unit: 1 00:24:19.609 Fused Compare & Write: Not Supported 00:24:19.609 Scatter-Gather List 00:24:19.609 SGL Command Set: Supported 00:24:19.609 SGL Keyed: Not Supported 00:24:19.609 SGL Bit Bucket Descriptor: Not Supported 00:24:19.609 SGL Metadata Pointer: Not Supported 00:24:19.609 Oversized SGL: Not Supported 00:24:19.609 SGL Metadata Address: Not Supported 00:24:19.609 SGL Offset: Supported 00:24:19.609 Transport SGL Data Block: Not Supported 00:24:19.609 Replay Protected Memory Block: Not Supported 00:24:19.609 00:24:19.609 Firmware Slot Information 00:24:19.609 ========================= 00:24:19.609 Active slot: 0 00:24:19.609 00:24:19.609 Asymmetric Namespace Access 00:24:19.609 =========================== 00:24:19.609 Change Count : 0 00:24:19.609 Number of ANA Group Descriptors : 1 00:24:19.609 ANA Group Descriptor : 0 00:24:19.609 ANA Group ID : 1 00:24:19.609 Number of NSID Values : 1 00:24:19.609 Change Count : 0 00:24:19.609 ANA State : 1 00:24:19.609 Namespace Identifier : 1 00:24:19.609 00:24:19.609 Commands Supported and Effects 00:24:19.609 ============================== 00:24:19.609 Admin Commands 00:24:19.609 -------------- 00:24:19.609 Get Log Page (02h): Supported 00:24:19.609 Identify (06h): Supported 00:24:19.609 Abort (08h): Supported 00:24:19.609 Set Features (09h): Supported 00:24:19.609 Get Features (0Ah): Supported 00:24:19.609 Asynchronous Event Request (0Ch): Supported 00:24:19.609 Keep Alive (18h): Supported 00:24:19.609 I/O Commands 00:24:19.609 ------------ 00:24:19.609 Flush (00h): Supported 00:24:19.609 Write (01h): Supported 
LBA-Change 00:24:19.609 Read (02h): Supported 00:24:19.609 Write Zeroes (08h): Supported LBA-Change 00:24:19.609 Dataset Management (09h): Supported 00:24:19.609 00:24:19.609 Error Log 00:24:19.609 ========= 00:24:19.609 Entry: 0 00:24:19.609 Error Count: 0x3 00:24:19.609 Submission Queue Id: 0x0 00:24:19.609 Command Id: 0x5 00:24:19.609 Phase Bit: 0 00:24:19.609 Status Code: 0x2 00:24:19.609 Status Code Type: 0x0 00:24:19.609 Do Not Retry: 1 00:24:19.609 Error Location: 0x28 00:24:19.609 LBA: 0x0 00:24:19.609 Namespace: 0x0 00:24:19.609 Vendor Log Page: 0x0 00:24:19.609 ----------- 00:24:19.609 Entry: 1 00:24:19.609 Error Count: 0x2 00:24:19.609 Submission Queue Id: 0x0 00:24:19.609 Command Id: 0x5 00:24:19.609 Phase Bit: 0 00:24:19.609 Status Code: 0x2 00:24:19.609 Status Code Type: 0x0 00:24:19.609 Do Not Retry: 1 00:24:19.609 Error Location: 0x28 00:24:19.609 LBA: 0x0 00:24:19.609 Namespace: 0x0 00:24:19.609 Vendor Log Page: 0x0 00:24:19.609 ----------- 00:24:19.609 Entry: 2 00:24:19.609 Error Count: 0x1 00:24:19.609 Submission Queue Id: 0x0 00:24:19.609 Command Id: 0x4 00:24:19.609 Phase Bit: 0 00:24:19.609 Status Code: 0x2 00:24:19.609 Status Code Type: 0x0 00:24:19.609 Do Not Retry: 1 00:24:19.609 Error Location: 0x28 00:24:19.609 LBA: 0x0 00:24:19.609 Namespace: 0x0 00:24:19.609 Vendor Log Page: 0x0 00:24:19.609 00:24:19.609 Number of Queues 00:24:19.609 ================ 00:24:19.609 Number of I/O Submission Queues: 128 00:24:19.609 Number of I/O Completion Queues: 128 00:24:19.609 00:24:19.609 ZNS Specific Controller Data 00:24:19.609 ============================ 00:24:19.609 Zone Append Size Limit: 0 00:24:19.609 00:24:19.609 00:24:19.609 Active Namespaces 00:24:19.609 ================= 00:24:19.609 get_feature(0x05) failed 00:24:19.609 Namespace ID:1 00:24:19.609 Command Set Identifier: NVM (00h) 00:24:19.609 Deallocate: Supported 00:24:19.609 Deallocated/Unwritten Error: Not Supported 00:24:19.609 Deallocated Read Value: Unknown 00:24:19.609 Deallocate 
in Write Zeroes: Not Supported 00:24:19.609 Deallocated Guard Field: 0xFFFF 00:24:19.609 Flush: Supported 00:24:19.609 Reservation: Not Supported 00:24:19.609 Namespace Sharing Capabilities: Multiple Controllers 00:24:19.609 Size (in LBAs): 1953525168 (931GiB) 00:24:19.609 Capacity (in LBAs): 1953525168 (931GiB) 00:24:19.609 Utilization (in LBAs): 1953525168 (931GiB) 00:24:19.609 UUID: 856099c9-d9ee-4e7a-8147-67cc3ce1fc27 00:24:19.609 Thin Provisioning: Not Supported 00:24:19.609 Per-NS Atomic Units: Yes 00:24:19.609 Atomic Boundary Size (Normal): 0 00:24:19.609 Atomic Boundary Size (PFail): 0 00:24:19.609 Atomic Boundary Offset: 0 00:24:19.609 NGUID/EUI64 Never Reused: No 00:24:19.609 ANA group ID: 1 00:24:19.609 Namespace Write Protected: No 00:24:19.609 Number of LBA Formats: 1 00:24:19.609 Current LBA Format: LBA Format #00 00:24:19.609 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:19.609 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:19.609 rmmod nvme_tcp 00:24:19.609 rmmod nvme_fabrics 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:24:19.609 
09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:19.609 09:08:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 
00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:22.144 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-save 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:22.145 09:08:37 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-restore 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:24:22.145 09:08:37 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:24.681 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:24.681 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:25.618 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:25.618 00:24:25.618 real 0m16.785s 00:24:25.618 user 0m4.413s 00:24:25.618 sys 0m8.789s 00:24:25.618 09:08:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:25.618 09:08:41 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:25.618 ************************************ 00:24:25.618 END TEST nvmf_identify_kernel_target 00:24:25.619 ************************************ 00:24:25.619 09:08:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:25.619 09:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:25.619 09:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:25.619 09:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:25.619 ************************************ 00:24:25.619 START TEST nvmf_auth_host 00:24:25.619 ************************************ 00:24:25.619 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:25.879 * Looking for test storage... 
00:24:25.879 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.879 --rc genhtml_branch_coverage=1 00:24:25.879 --rc genhtml_function_coverage=1 00:24:25.879 --rc genhtml_legend=1 00:24:25.879 --rc geninfo_all_blocks=1 00:24:25.879 --rc geninfo_unexecuted_blocks=1 00:24:25.879 00:24:25.879 ' 00:24:25.879 09:08:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.879 --rc genhtml_branch_coverage=1 00:24:25.879 --rc genhtml_function_coverage=1 00:24:25.879 --rc genhtml_legend=1 00:24:25.879 --rc geninfo_all_blocks=1 00:24:25.879 --rc geninfo_unexecuted_blocks=1 00:24:25.879 00:24:25.879 ' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.879 --rc genhtml_branch_coverage=1 00:24:25.879 --rc genhtml_function_coverage=1 00:24:25.879 --rc genhtml_legend=1 00:24:25.879 --rc geninfo_all_blocks=1 00:24:25.879 --rc geninfo_unexecuted_blocks=1 00:24:25.879 00:24:25.879 ' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:25.879 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:25.879 --rc genhtml_branch_coverage=1 00:24:25.879 --rc genhtml_function_coverage=1 00:24:25.879 --rc genhtml_legend=1 00:24:25.879 --rc geninfo_all_blocks=1 00:24:25.879 --rc geninfo_unexecuted_blocks=1 00:24:25.879 00:24:25.879 ' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.879 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:25.880 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:24:25.880 09:08:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:32.453 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:32.453 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:32.453 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:32.453 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:32.454 Found net devices under 0000:86:00.0: cvl_0_0 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:32.454 Found net devices under 0000:86:00.1: cvl_0_1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # create_target_ns 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:32.454 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:32.454 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:32.454 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:32.455 10.0.0.1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:32.455 10.0.0.2 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:32.455 09:08:47 
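The `val_to_ip` step in this trace turns the integer `ip_pool` value (167772161 == 0x0A000001) into dotted-quad form before handing it to `ip addr add`. A self-contained sketch of that conversion — the function name comes from the trace, but the bit-shift octet extraction is an assumed implementation, since the log only shows the final `printf`:

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip as traced: split a 32-bit value into four octets.
# The shift/mask extraction is assumed; the trace only shows the printf.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

Incrementing the integer by one per interface pair is what gives the trace its consecutive 10.0.0.1/10.0.0.2 initiator/target addresses.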
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:32.455 09:08:47 
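The `ipts` call in the trace expands to plain `iptables` with an `SPDK_NVMF:` comment built from the rule's own arguments, so teardown can later find and reverse exactly the rules the tests inserted. A sketch of such a wrapper — assumed to mirror what the expanded command shows; actually applying rules needs root and the iptables `comment` match module:

```shell
# Sketch of an ipts-style wrapper: tag every rule with a comment built
# from its own arguments so cleanup can locate and delete it later.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

# As in the trace: accept NVMe/TCP traffic to port 4420 on the initiator
# interface (cvl_0_0 is this test rig's device name).
# ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```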
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:32.455 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:32.455 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.307 ms 00:24:32.455 00:24:32.455 --- 10.0.0.1 ping statistics --- 00:24:32.455 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.455 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:32.455 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:32.456 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:32.456 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:32.456 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:24:32.456 00:24:32.456 --- 10.0.0.2 ping statistics --- 00:24:32.456 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:32.456 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:32.456 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:32.456 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:32.456 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:32.456 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:32.457 09:08:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=2458301 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 2458301 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@835 -- # '[' -z 2458301 ']' 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:32.457 09:08:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@526 -- # local -A digests 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=7bd9678b189be8bd6990523c98f6825d 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.QtQ 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 7bd9678b189be8bd6990523c98f6825d 0 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 7bd9678b189be8bd6990523c98f6825d 0 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=7bd9678b189be8bd6990523c98f6825d 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.QtQ 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.QtQ 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.QtQ 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:24:33.027 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=06e382e8a829077676c672a93ef14304cc18d4aa282e7182752486dc2593e3b6 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.E19 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 06e382e8a829077676c672a93ef14304cc18d4aa282e7182752486dc2593e3b6 3 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 06e382e8a829077676c672a93ef14304cc18d4aa282e7182752486dc2593e3b6 3 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=06e382e8a829077676c672a93ef14304cc18d4aa282e7182752486dc2593e3b6 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.E19 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.E19 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.E19 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=012d2a43ce917198790917d6d9354ec127a6a94659ae19b9 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.8UB 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 012d2a43ce917198790917d6d9354ec127a6a94659ae19b9 0 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 012d2a43ce917198790917d6d9354ec127a6a94659ae19b9 0 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=012d2a43ce917198790917d6d9354ec127a6a94659ae19b9 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@506 -- # digest=0 00:24:33.028 09:08:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.8UB 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.8UB 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.8UB 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=eb92194587e9812285975e9844729134911afdfb4fac9a09 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.WHs 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key eb92194587e9812285975e9844729134911afdfb4fac9a09 2 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 eb92194587e9812285975e9844729134911afdfb4fac9a09 2 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key 
digest 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=eb92194587e9812285975e9844729134911afdfb4fac9a09 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.028 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.WHs 00:24:33.288 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.WHs 00:24:33.288 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.WHs 00:24:33.288 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:33.288 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.288 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=6311ad783a75f04a5895e6bcacc5968c 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.FGX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 
6311ad783a75f04a5895e6bcacc5968c 1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 6311ad783a75f04a5895e6bcacc5968c 1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=6311ad783a75f04a5895e6bcacc5968c 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.FGX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.FGX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.FGX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=52a849b4dce9544918f1c27208d5f349 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp 
-t spdk.key-sha256.XXX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.AaY 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 52a849b4dce9544918f1c27208d5f349 1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 52a849b4dce9544918f1c27208d5f349 1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=52a849b4dce9544918f1c27208d5f349 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.AaY 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.AaY 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.AaY 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # 
xxd -p -c0 -l 24 /dev/urandom 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d704c3b80a05f4b7ded7d2b63fc455ae5c2789cbbc847209 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.osE 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d704c3b80a05f4b7ded7d2b63fc455ae5c2789cbbc847209 2 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d704c3b80a05f4b7ded7d2b63fc455ae5c2789cbbc847209 2 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d704c3b80a05f4b7ded7d2b63fc455ae5c2789cbbc847209 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.osE 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.osE 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.osE 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@526 -- # local -A digests 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=a27cb772605fafc6883346dc44116cc6 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:24:33.289 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Vcr 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key a27cb772605fafc6883346dc44116cc6 0 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 a27cb772605fafc6883346dc44116cc6 0 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=a27cb772605fafc6883346dc44116cc6 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Vcr 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Vcr 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Vcr 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=eb926b296fafb8561ef8e61dcc811ad2ab90453dc0a9fc89bc6f0057b475bdab 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.PAr 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key eb926b296fafb8561ef8e61dcc811ad2ab90453dc0a9fc89bc6f0057b475bdab 3 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 eb926b296fafb8561ef8e61dcc811ad2ab90453dc0a9fc89bc6f0057b475bdab 3 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=eb926b296fafb8561ef8e61dcc811ad2ab90453dc0a9fc89bc6f0057b475bdab 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:24:33.290 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.PAr 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.PAr 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.PAr 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2458301 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 2458301 ']' 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
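The trace above generates each DH-CHAP secret by reading random hex with `xxd -p -c0` and handing it to a small embedded formatter (`nvmf/common.sh@507 -- # python -`). A minimal sketch of that formatting step, assuming the standard NVMe-oF secret representation (base64 of the ASCII key followed by its CRC-32 in little-endian byte order); `format_dhchap_key` here is a hypothetical stand-in for the script's helper, not its actual source:

```python
import base64
import zlib

def format_dhchap_key(key: str, digest: int, prefix: str = "DHHC-1") -> str:
    """Wrap a hex key string in the DHHC-1 secret representation:
    prefix:digest:base64(key bytes || CRC-32 of key, little-endian):"""
    raw = key.encode("ascii")
    crc = zlib.crc32(raw).to_bytes(4, "little")  # 4-byte integrity check
    return f"{prefix}:{digest:02x}:{base64.b64encode(raw + crc).decode()}:"

# Hex key and digest index (2 = sha384) taken from the trace above.
secret = format_dhchap_key(
    "eb92194587e9812285975e9844729134911afdfb4fac9a09", 2)
print(secret)
```

Decoding the base64 body of the `ckey1` secret that appears later in the log (`DHHC-1:02:ZWI5...`) shows exactly this layout: the ASCII hex key followed by four trailing CRC bytes.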
00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.QtQ 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.E19 ]] 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.E19 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.549 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.808 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.8UB 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.WHs ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.WHs 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.FGX 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.AaY ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.AaY 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.osE 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Vcr ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Vcr 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.PAr 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:24:33.809 09:08:49 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:33.809 09:08:49 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:36.350 Waiting for block devices as requested 00:24:36.350 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:36.610 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:36.610 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:36.610 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:36.869 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:36.869 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:36.869 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:36.869 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:37.129 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:37.129 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:37.129 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:37.388 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:37.388 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:37.388 0000:80:04.3 (8086 2021): 
vfio-pci -> ioatdma 00:24:37.388 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:37.646 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:37.646 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:38.214 No valid GPT data, bailing 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:38.214 09:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:24:38.214 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:24:38.215 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:24:38.215 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:24:38.215 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:24:38.215 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:38.215 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:38.474 00:24:38.474 Discovery Log Number of Records 2, Generation counter 2 00:24:38.474 =====Discovery Log Entry 0====== 00:24:38.474 trtype: tcp 00:24:38.474 adrfam: ipv4 00:24:38.474 subtype: current discovery subsystem 00:24:38.474 treq: not specified, sq flow control disable supported 00:24:38.474 portid: 1 00:24:38.474 trsvcid: 4420 00:24:38.474 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:38.474 traddr: 10.0.0.1 00:24:38.474 eflags: none 00:24:38.474 sectype: none 00:24:38.474 
=====Discovery Log Entry 1====== 00:24:38.474 trtype: tcp 00:24:38.474 adrfam: ipv4 00:24:38.474 subtype: nvme subsystem 00:24:38.474 treq: not specified, sq flow control disable supported 00:24:38.474 portid: 1 00:24:38.474 trsvcid: 4420 00:24:38.474 subnqn: nqn.2024-02.io.spdk:cnode0 00:24:38.474 traddr: 10.0.0.1 00:24:38.474 eflags: none 00:24:38.474 sectype: none 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.474 nvme0n1 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.474 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.734 09:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 nvme0n1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.734 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 nvme0n1 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.994 
09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:38.994 09:08:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:38.994 09:08:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.254 nvme0n1 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.254 09:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.254 09:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.254 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.515 nvme0n1 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.515 09:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.515 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 nvme0n1 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:24:39.775 09:08:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:39.775 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.035 nvme0n1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:40.035 
09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 09:08:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.035 nvme0n1 00:24:40.035 09:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.035 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.035 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.035 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.035 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.294 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 
-- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.295 nvme0n1 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.295 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.554 nvme0n1 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.554 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.813 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:40.814 09:08:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.814 nvme0n1 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:40.814 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe4096 0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.073 09:08:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.332 nvme0n1 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.332 09:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:41.332 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.333 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:41.592 nvme0n1 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.592 09:08:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.592 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.851 nvme0n1 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:41.851 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:41.852 09:08:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.111 nvme0n1 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.111 09:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:42.111 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:42.112 09:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.112 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.370 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.370 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:42.370 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.370 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.629 nvme0n1 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.629 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.630 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.889 nvme0n1 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:42.889 09:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:42.889 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:42.890 09:08:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:42.890 09:08:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.459 nvme0n1 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- 
# keyid=2 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.459 09:08:59 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.459 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.719 nvme0n1 00:24:43.719 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.719 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:43.719 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.719 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.719 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:24:43.979 
09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:43.979 09:08:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.239 nvme0n1 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.239 09:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:24:44.239 
09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.239 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.498 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.498 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:44.498 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.498 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 nvme0n1 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:44.757 09:09:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:44.757 09:09:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.326 nvme0n1 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.326 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:45.586 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.156 nvme0n1 00:24:46.156 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.156 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.156 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.156 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.156 09:09:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.156 
09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.156 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.726 nvme0n1 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:46.726 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:46.727 09:09:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.295 nvme0n1 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.295 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:47.555 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:47.556 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.126 nvme0n1 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.126 09:09:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.126 09:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.126 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 nvme0n1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 nvme0n1 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.386 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.646 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.646 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.646 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.647 09:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.647 nvme0n1 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.647 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.907 09:09:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.907 nvme0n1 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:24:48.907 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:48.908 
09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:48.908 09:09:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.167 nvme0n1 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.167 
09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.167 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.168 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.427 nvme0n1 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.427 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.428 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.688 nvme0n1 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.688 09:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.688 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.948 nvme0n1 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.948 09:09:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:49.948 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 
00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:49.949 09:09:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.208 nvme0n1 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.208 
09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.208 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 nvme0n1 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.467 
09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.467 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.726 nvme0n1 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:50.726 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.727 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.985 nvme0n1 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:50.985 09:09:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.245 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:24:51.246 09:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.246 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.505 nvme0n1 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:51.505 09:09:07 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.505 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.764 nvme0n1
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=:
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=:
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:51.764 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.023 nvme0n1
00:24:52.023 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.023 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.023 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.024 09:09:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv:
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=:
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv:
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]]
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=:
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.024 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.591 nvme0n1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==:
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==:
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==:
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==:
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.591 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.849 nvme0n1
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:52.849 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01:
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0:
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:53.106 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01:
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]]
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0:
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.107 09:09:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.365 nvme0n1
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==:
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr:
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==:
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr:
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.365 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.932 nvme0n1
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=:
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=:
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:53.932 09:09:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.190 nvme0n1
00:24:54.190 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.190 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:54.190 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:54.190 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.190 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv:
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=:
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv:
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]]
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=:
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:54.449 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.020 nvme0n1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==:
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==:
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==:
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==:
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.020 09:09:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.588 nvme0n1
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01:
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0:
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01:
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0:
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:55.588 09:09:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.157 nvme0n1
00:24:56.157 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.157 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:24:56.157 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:24:56.157 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.157 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:24:56.416 09:09:12
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 
00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.416 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.984 nvme0n1 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.984 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:56.985 
09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:56.985 09:09:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.553 nvme0n1 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.553 
09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.553 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:57.812 nvme0n1 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.812 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:57.812 
09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:57.813 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:57.813 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:57.813 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.072 nvme0n1 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 
00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.072 09:09:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.072 09:09:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.072 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.331 nvme0n1 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.331 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.332 09:09:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.332 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 nvme0n1 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 4 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:58.591 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.592 nvme0n1 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:58.592 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:58.851 09:09:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 nvme0n1 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:58.851 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 09:09:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 nvme0n1 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.111 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe3072 2 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.371 
09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.371 nvme0n1 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:59.371 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.631 09:09:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.631 nvme0n1 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:24:59.631 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 4 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.891 nvme0n1 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:24:59.891 09:09:15 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:24:59.891 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:24:59.892 09:09:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.151 nvme0n1 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.151 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.410 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.670 nvme0n1 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe4096 2 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.670 
09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.670 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.929 nvme0n1 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:00.929 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:00.930 09:09:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:00.930 09:09:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.190 nvme0n1 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe4096 4 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.190 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.450 nvme0n1 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:01.450 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.710 09:09:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:01.710 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.970 nvme0n1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:01.970 09:09:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.540 nvme0n1 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.540 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe6144 2 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.541 
09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.541 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.929 nvme0n1 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:02.929 09:09:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:02.929 09:09:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.565 nvme0n1 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe6144 4 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.565 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.824 nvme0n1 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:03.824 09:09:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:N2JkOTY3OGIxODliZThiZDY5OTA1MjNjOThmNjgyNWRWfoUv: 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: ]] 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:MDZlMzgyZThhODI5MDc3Njc2YzY3MmE5M2VmMTQzMDRjYzE4ZDRhYTI4MmU3MTgyNzUyNDg2ZGMyNTkzZTNiNr2cV8g=: 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:03.824 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:03.825 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:03.825 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:03.825 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:03.825 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:03.825 09:09:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.392 nvme0n1 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.392 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:04.650 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:04.651 09:09:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.217 nvme0n1 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe8192 2 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:05.217 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.218 
09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.218 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.786 nvme0n1 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:ZDcwNGMzYjgwYTA1ZjRiN2RlZDdkMmI2M2ZjNDU1YWU1YzI3ODljYmJjODQ3MjA56JiqsA==: 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YTI3Y2I3NzI2MDVmYWZjNjg4MzM0NmRjNDQxMTZjYzZSxJPr: 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:05.786 09:09:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:05.786 09:09:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.354 nvme0n1 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.354 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:ZWI5MjZiMjk2ZmFmYjg1NjFlZjhlNjFkY2M4MTFhZDJhYjkwNDUzZGMwYTlmYzg5YmM2ZjAwNTdiNDc1YmRhYlGnZvI=: 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe8192 4 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.613 09:09:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.181 nvme0n1 00:25:07.181 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.181 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.181 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.181 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.182 09:09:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.182 request: 00:25:07.182 { 00:25:07.182 "name": "nvme0", 00:25:07.182 "trtype": "tcp", 00:25:07.182 "traddr": "10.0.0.1", 00:25:07.182 "adrfam": "ipv4", 00:25:07.182 "trsvcid": "4420", 00:25:07.182 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:07.182 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:07.182 "prchk_reftag": false, 00:25:07.182 "prchk_guard": false, 00:25:07.182 "hdgst": false, 00:25:07.182 "ddgst": false, 00:25:07.182 "allow_unrecognized_csi": false, 00:25:07.182 "method": "bdev_nvme_attach_controller", 00:25:07.182 "req_id": 1 00:25:07.182 } 00:25:07.182 Got JSON-RPC error response 00:25:07.182 response: 00:25:07.182 { 00:25:07.182 "code": -5, 00:25:07.182 "message": "Input/output error" 00:25:07.182 } 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.182 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.441 request: 00:25:07.441 { 00:25:07.441 "name": "nvme0", 00:25:07.441 "trtype": "tcp", 00:25:07.441 "traddr": "10.0.0.1", 00:25:07.441 "adrfam": "ipv4", 00:25:07.441 "trsvcid": "4420", 00:25:07.441 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:07.441 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:07.441 
"prchk_reftag": false, 00:25:07.441 "prchk_guard": false, 00:25:07.441 "hdgst": false, 00:25:07.441 "ddgst": false, 00:25:07.441 "dhchap_key": "key2", 00:25:07.441 "allow_unrecognized_csi": false, 00:25:07.441 "method": "bdev_nvme_attach_controller", 00:25:07.441 "req_id": 1 00:25:07.441 } 00:25:07.441 Got JSON-RPC error response 00:25:07.441 response: 00:25:07.441 { 00:25:07.441 "code": -5, 00:25:07.441 "message": "Input/output error" 00:25:07.441 } 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:07.441 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:07.441 09:09:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.442 request: 00:25:07.442 { 00:25:07.442 "name": "nvme0", 00:25:07.442 "trtype": "tcp", 00:25:07.442 "traddr": "10.0.0.1", 00:25:07.442 "adrfam": "ipv4", 00:25:07.442 "trsvcid": "4420", 00:25:07.442 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:07.442 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:07.442 "prchk_reftag": false, 00:25:07.442 "prchk_guard": false, 00:25:07.442 "hdgst": false, 00:25:07.442 "ddgst": false, 00:25:07.442 "dhchap_key": "key1", 00:25:07.442 "dhchap_ctrlr_key": "ckey2", 00:25:07.442 "allow_unrecognized_csi": false, 00:25:07.442 "method": "bdev_nvme_attach_controller", 00:25:07.442 "req_id": 1 00:25:07.442 } 00:25:07.442 Got JSON-RPC error response 00:25:07.442 response: 00:25:07.442 { 00:25:07.442 "code": -5, 00:25:07.442 "message": 
"Input/output error" 00:25:07.442 } 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.442 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.701 nvme0n1 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:07.701 09:09:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:07.701 request:
00:25:07.701 {
00:25:07.701 "name": "nvme0",
00:25:07.701 "dhchap_key": "key1",
00:25:07.701 "dhchap_ctrlr_key": "ckey2",
00:25:07.701 "method": "bdev_nvme_set_keys",
00:25:07.701 "req_id": 1
00:25:07.701 }
00:25:07.701 Got JSON-RPC error response
00:25:07.701 response:
00:25:07.701 {
00:25:07.701 "code": -13,
00:25:07.701 "message": "Permission denied"
00:25:07.701 }
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host --
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:07.701 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:07.960 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:07.960 09:09:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:08.897 09:09:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:09.832 
09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:09.832 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:MDEyZDJhNDNjZTkxNzE5ODc5MDkxN2Q2ZDkzNTRlYzEyN2E2YTk0NjU5YWUxOWI5G389kQ==: 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: ]] 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZWI5MjE5NDU4N2U5ODEyMjg1OTc1ZTk4NDQ3MjkxMzQ5MTFhZmRmYjRmYWM5YTA5pjGJvg==: 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:09.833 09:09:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:09.833 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.091 nvme0n1 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:NjMxMWFkNzgzYTc1ZjA0YTU4OTVlNmJjYWNjNTk2OGMYoJ01: 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: ]] 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:NTJhODQ5YjRkY2U5NTQ0OTE4ZjFjMjcyMDhkNWYzNDl2jc+0: 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:10.091 09:09:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.091 request:
00:25:10.091 {
00:25:10.091 "name": "nvme0",
00:25:10.091 "dhchap_key": "key2",
00:25:10.091 "dhchap_ctrlr_key": "ckey1",
00:25:10.091 "method": "bdev_nvme_set_keys",
00:25:10.091 "req_id": 1
00:25:10.091 }
00:25:10.091 Got JSON-RPC error response
00:25:10.092 response:
00:25:10.092 {
00:25:10.092 "code": -13,
00:25:10.092 "message": "Permission denied"
00:25:10.092 }
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]]
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:10.092 09:09:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:25:11.468 
09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:11.468 rmmod nvme_tcp 00:25:11.468 rmmod nvme_fabrics 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 2458301 ']' 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 2458301 00:25:11.468 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 2458301 ']' 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 2458301 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2458301 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2458301' 00:25:11.469 killing process with pid 2458301 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 2458301 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 2458301 00:25:11.469 09:09:27 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:11.469 09:09:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in 
"${dev_map[@]}" 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-save 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-restore 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:25:14.005 09:09:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:16.542 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.1 (8086 
2021): ioatdma -> vfio-pci 00:25:16.542 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:17.479 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:17.479 09:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.QtQ /tmp/spdk.key-null.8UB /tmp/spdk.key-sha256.FGX /tmp/spdk.key-sha384.osE /tmp/spdk.key-sha512.PAr /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:17.479 09:09:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.767 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:20.767 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:20.767 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:20.767 00:25:20.767 real 0m54.738s 00:25:20.767 user 0m49.242s 00:25:20.767 sys 0m12.674s 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.767 ************************************ 00:25:20.767 END TEST nvmf_auth_host 00:25:20.767 ************************************ 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.767 ************************************ 00:25:20.767 START TEST nvmf_bdevperf 00:25:20.767 ************************************ 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:20.767 * Looking for test storage... 
00:25:20.767 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.767 --rc genhtml_branch_coverage=1 00:25:20.767 --rc genhtml_function_coverage=1 00:25:20.767 --rc genhtml_legend=1 00:25:20.767 --rc geninfo_all_blocks=1 00:25:20.767 --rc geninfo_unexecuted_blocks=1 00:25:20.767 00:25:20.767 ' 00:25:20.767 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:20.767 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.767 --rc genhtml_branch_coverage=1 00:25:20.767 --rc genhtml_function_coverage=1 00:25:20.767 --rc genhtml_legend=1 00:25:20.767 --rc geninfo_all_blocks=1 00:25:20.767 --rc geninfo_unexecuted_blocks=1 00:25:20.767 00:25:20.767 ' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.768 --rc genhtml_branch_coverage=1 00:25:20.768 --rc genhtml_function_coverage=1 00:25:20.768 --rc genhtml_legend=1 00:25:20.768 --rc geninfo_all_blocks=1 00:25:20.768 --rc geninfo_unexecuted_blocks=1 00:25:20.768 00:25:20.768 ' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:20.768 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:20.768 --rc genhtml_branch_coverage=1 00:25:20.768 --rc genhtml_function_coverage=1 00:25:20.768 --rc genhtml_legend=1 00:25:20.768 --rc geninfo_all_blocks=1 00:25:20.768 --rc geninfo_unexecuted_blocks=1 00:25:20.768 00:25:20.768 ' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:20.768 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:25:20.768 09:09:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:27.338 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:27.338 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:27.338 Found net devices under 0000:86:00.0: cvl_0_0 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:27.338 Found net devices under 0000:86:00.1: cvl_0_1 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:27.338 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # create_target_ns 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:27.339 09:09:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 
00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:27.339 10.0.0.1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:27.339 10.0.0.2 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # 
local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < 
pairs )) 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:27.339 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:27.340 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:27.340 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.459 ms 00:25:27.340 00:25:27.340 --- 10.0.0.1 ping statistics --- 00:25:27.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.340 rtt min/avg/max/mdev = 0.459/0.459/0.459/0.000 ms 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:27.340 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:27.340 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.199 ms 00:25:27.340 00:25:27.340 --- 10.0.0.2 ping statistics --- 00:25:27.340 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:27.340 rtt min/avg/max/mdev = 0.199/0.199/0.199/0.000 ms 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:25:27.340 09:09:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:27.340 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:27.341 09:09:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:27.341 09:09:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=2472736 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 2472736 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2472736 ']' 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.341 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 [2024-11-20 09:09:42.698600] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:25:27.341 [2024-11-20 09:09:42.698653] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:27.341 [2024-11-20 09:09:42.777878] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:27.341 [2024-11-20 09:09:42.821108] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:27.341 [2024-11-20 09:09:42.821147] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:27.341 [2024-11-20 09:09:42.821154] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:27.341 [2024-11-20 09:09:42.821160] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:27.341 [2024-11-20 09:09:42.821165] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:27.341 [2024-11-20 09:09:42.822660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:27.341 [2024-11-20 09:09:42.822766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:27.341 [2024-11-20 09:09:42.822767] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 [2024-11-20 09:09:42.959370] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 Malloc0 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.341 09:09:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:27.341 [2024-11-20 09:09:43.019066] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:25:27.341 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:25:27.341 
09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:27.342 { 00:25:27.342 "params": { 00:25:27.342 "name": "Nvme$subsystem", 00:25:27.342 "trtype": "$TEST_TRANSPORT", 00:25:27.342 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:27.342 "adrfam": "ipv4", 00:25:27.342 "trsvcid": "$NVMF_PORT", 00:25:27.342 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:27.342 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:27.342 "hdgst": ${hdgst:-false}, 00:25:27.342 "ddgst": ${ddgst:-false} 00:25:27.342 }, 00:25:27.342 "method": "bdev_nvme_attach_controller" 00:25:27.342 } 00:25:27.342 EOF 00:25:27.342 )") 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:25:27.342 09:09:43 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:27.342 "params": { 00:25:27.342 "name": "Nvme1", 00:25:27.342 "trtype": "tcp", 00:25:27.342 "traddr": "10.0.0.2", 00:25:27.342 "adrfam": "ipv4", 00:25:27.342 "trsvcid": "4420", 00:25:27.342 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:27.342 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:27.342 "hdgst": false, 00:25:27.342 "ddgst": false 00:25:27.342 }, 00:25:27.342 "method": "bdev_nvme_attach_controller" 00:25:27.342 }' 00:25:27.342 [2024-11-20 09:09:43.072763] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:25:27.342 [2024-11-20 09:09:43.072808] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2472765 ] 00:25:27.342 [2024-11-20 09:09:43.149388] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:27.342 [2024-11-20 09:09:43.190848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:27.601 Running I/O for 1 seconds... 00:25:28.537 11046.00 IOPS, 43.15 MiB/s 00:25:28.537 Latency(us) 00:25:28.537 [2024-11-20T08:09:44.578Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:28.537 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:28.537 Verification LBA range: start 0x0 length 0x4000 00:25:28.537 Nvme1n1 : 1.01 11087.04 43.31 0.00 0.00 11500.31 1032.90 15386.71 00:25:28.537 [2024-11-20T08:09:44.578Z] =================================================================================================================== 00:25:28.537 [2024-11-20T08:09:44.578Z] Total : 11087.04 43.31 0.00 0.00 11500.31 1032.90 15386.71 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2473000 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for 
subsystem in "${@:-1}" 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:25:28.537 { 00:25:28.537 "params": { 00:25:28.537 "name": "Nvme$subsystem", 00:25:28.537 "trtype": "$TEST_TRANSPORT", 00:25:28.537 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:28.537 "adrfam": "ipv4", 00:25:28.537 "trsvcid": "$NVMF_PORT", 00:25:28.537 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:28.537 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:28.537 "hdgst": ${hdgst:-false}, 00:25:28.537 "ddgst": ${ddgst:-false} 00:25:28.537 }, 00:25:28.537 "method": "bdev_nvme_attach_controller" 00:25:28.537 } 00:25:28.537 EOF 00:25:28.537 )") 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:25:28.537 09:09:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:25:28.538 "params": { 00:25:28.538 "name": "Nvme1", 00:25:28.538 "trtype": "tcp", 00:25:28.538 "traddr": "10.0.0.2", 00:25:28.538 "adrfam": "ipv4", 00:25:28.538 "trsvcid": "4420", 00:25:28.538 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:25:28.538 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:25:28.538 "hdgst": false, 00:25:28.538 "ddgst": false 00:25:28.538 }, 00:25:28.538 "method": "bdev_nvme_attach_controller" 00:25:28.538 }' 00:25:28.797 [2024-11-20 09:09:44.606311] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:25:28.797 [2024-11-20 09:09:44.606361] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2473000 ] 00:25:28.797 [2024-11-20 09:09:44.681754] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.797 [2024-11-20 09:09:44.720766] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:25:29.056 Running I/O for 15 seconds... 00:25:31.371 10977.00 IOPS, 42.88 MiB/s [2024-11-20T08:09:47.673Z] 11096.50 IOPS, 43.35 MiB/s [2024-11-20T08:09:47.673Z] 09:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2472736 00:25:31.632 09:09:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:25:31.632 [2024-11-20 09:09:47.573981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:96024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.632 [2024-11-20 09:09:47.574019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.632 [2024-11-20 09:09:47.574038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:96032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.632 [2024-11-20 09:09:47.574047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.632 [2024-11-20 09:09:47.574057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:96040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.632 [2024-11-20 09:09:47.574065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.632 [2024-11-20 09:09:47.574075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:35 nsid:1 lba:96048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.632 [2024-11-20 09:09:47.574084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical nvme_io_qpair_print_command READ / spdk_nvme_print_completion "ABORTED - SQ DELETION (00/08)" entry pairs repeated for lba:96056 through lba:96400 ...] 00:25:31.634 [2024-11-20 09:09:47.574798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:96408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574804] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:96416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:96424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:96432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:96440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:96448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 
lba:96456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:96464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:96472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:96480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.574935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.574943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:96488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:96496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 
[2024-11-20 09:09:47.575091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:96504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575105] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:96512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:96520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:96528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:96536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:96544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575171] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:96552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:96560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:96568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:96576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:96584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 
lba:96592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:96600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:96608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:96616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:96624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:96632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 
[2024-11-20 09:09:47.575348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:96640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:96648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:96656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:96664 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:96672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:96680 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575429] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:96688 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:96696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:96704 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.634 [2024-11-20 09:09:47.575484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:96712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.634 [2024-11-20 09:09:47.575493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:96720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 
lba:96728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:96736 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:96744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:96760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:96768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 
[2024-11-20 09:09:47.575605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:96776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:96784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:96792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:96800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:96808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:96816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575690] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:96824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:96832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:96840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:96848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:96856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 
lba:96864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:96872 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:96880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:96896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:96904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 
[2024-11-20 09:09:47.575866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:96912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:96920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:96928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:96936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:96944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:96952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575954] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:96960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:96968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.575992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:96976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.575999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:96984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:96992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 
lba:97000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:97008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:97016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:97024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.635 [2024-11-20 09:09:47.576089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.635 [2024-11-20 09:09:47.576097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:97032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:25:31.636 [2024-11-20 09:09:47.576103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.636 [2024-11-20 09:09:47.576111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1da7e10 is same with the state(6) to be set 00:25:31.636 [2024-11-20 09:09:47.576119] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:25:31.636 [2024-11-20 09:09:47.576125] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:25:31.636 [2024-11-20 09:09:47.576131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:97040 len:8 PRP1 0x0 PRP2 0x0 00:25:31.636 [2024-11-20 09:09:47.576138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:25:31.636 [2024-11-20 09:09:47.579046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.636 [2024-11-20 09:09:47.579100] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.636 [2024-11-20 09:09:47.579700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.636 [2024-11-20 09:09:47.579717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.636 [2024-11-20 09:09:47.579725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.636 [2024-11-20 09:09:47.579902] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.636 [2024-11-20 09:09:47.580085] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.636 [2024-11-20 09:09:47.580094] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.636 [2024-11-20 09:09:47.580102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.636 [2024-11-20 09:09:47.580109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.636 [2024-11-20 09:09:47.592230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:31.636 [2024-11-20 09:09:47.592694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.636 [2024-11-20 09:09:47.592712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:31.636 [2024-11-20 09:09:47.592720] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:31.636 [2024-11-20 09:09:47.592892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:31.636 [2024-11-20 09:09:47.593071] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:31.636 [2024-11-20 09:09:47.593080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:31.636 [2024-11-20 09:09:47.593087] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:31.636 [2024-11-20 09:09:47.593094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:31.636 [2024-11-20 09:09:47.605129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:31.636 [2024-11-20 09:09:47.605588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.636 [2024-11-20 09:09:47.605606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:31.636 [2024-11-20 09:09:47.605614] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:31.636 [2024-11-20 09:09:47.605785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:31.636 [2024-11-20 09:09:47.605964] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:31.636 [2024-11-20 09:09:47.605973] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:31.636 [2024-11-20 09:09:47.605980] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:31.636 [2024-11-20 09:09:47.605986] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:31.636 [2024-11-20 09:09:47.617922] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:31.636 [2024-11-20 09:09:47.618278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:31.636 [2024-11-20 09:09:47.618295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:31.636 [2024-11-20 09:09:47.618303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:31.636 [2024-11-20 09:09:47.618464] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:31.636 [2024-11-20 09:09:47.618627] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:31.636 [2024-11-20 09:09:47.618634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:31.636 [2024-11-20 09:09:47.618641] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:31.636 [2024-11-20 09:09:47.618646] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:31.636 [2024-11-20 09:09:47.630814] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.636 [2024-11-20 09:09:47.631242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.636 [2024-11-20 09:09:47.631259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.636 [2024-11-20 09:09:47.631269] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.636 [2024-11-20 09:09:47.631441] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.636 [2024-11-20 09:09:47.631611] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.636 [2024-11-20 09:09:47.631619] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.636 [2024-11-20 09:09:47.631626] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.636 [2024-11-20 09:09:47.631632] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.636 [2024-11-20 09:09:47.643720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.636 [2024-11-20 09:09:47.644164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.636 [2024-11-20 09:09:47.644217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.636 [2024-11-20 09:09:47.644241] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.636 [2024-11-20 09:09:47.644802] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.636 [2024-11-20 09:09:47.645197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.636 [2024-11-20 09:09:47.645215] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.636 [2024-11-20 09:09:47.645229] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.636 [2024-11-20 09:09:47.645243] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.636 [2024-11-20 09:09:47.658875] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.636 [2024-11-20 09:09:47.659309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.636 [2024-11-20 09:09:47.659331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.636 [2024-11-20 09:09:47.659341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.636 [2024-11-20 09:09:47.659593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.636 [2024-11-20 09:09:47.659846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.636 [2024-11-20 09:09:47.659857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.636 [2024-11-20 09:09:47.659866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.636 [2024-11-20 09:09:47.659875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.671953] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.672380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.672397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.672405] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.672575] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.672747] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.672758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.672765] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.896 [2024-11-20 09:09:47.672771] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.684950] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.685338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.685355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.685362] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.685533] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.685704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.685713] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.685719] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.896 [2024-11-20 09:09:47.685725] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.697871] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.698244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.698261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.698268] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.698439] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.698610] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.698618] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.698625] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.896 [2024-11-20 09:09:47.698631] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.710774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.711228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.711246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.711254] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.711424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.711595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.711603] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.711609] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.896 [2024-11-20 09:09:47.711618] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.723648] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.724056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.724072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.724079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.724241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.724402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.724410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.724416] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.896 [2024-11-20 09:09:47.724422] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.896 [2024-11-20 09:09:47.736441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.896 [2024-11-20 09:09:47.736871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.896 [2024-11-20 09:09:47.736904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.896 [2024-11-20 09:09:47.736927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.896 [2024-11-20 09:09:47.737518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.896 [2024-11-20 09:09:47.737740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.896 [2024-11-20 09:09:47.737748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.896 [2024-11-20 09:09:47.737755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.737761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.749222] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.749670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.749687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.749694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.749865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.750041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.750050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.750056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.750062] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.762041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.762451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.762497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.762521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.762961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.763149] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.763157] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.763164] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.763170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.774876] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.775306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.775323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.775330] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.775501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.775672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.775680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.775686] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.775693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.787862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.788274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.788291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.788298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.788469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.788640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.788649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.788655] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.788661] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.800849] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.801258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.801276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.801286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.801458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.801631] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.801640] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.801646] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.801652] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.813746] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.814124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.814142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.814149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.814332] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.814504] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.814512] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.814518] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.814525] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.826734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.827174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.827191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.827198] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.827374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.827558] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.827567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.827573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.827580] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.839917] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.840262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.840279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.840287] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.840463] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.840640] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.840652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.840659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.840667] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.852989] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.853400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.853417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.853426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.853603] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.853779] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.853787] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.897 [2024-11-20 09:09:47.853794] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.897 [2024-11-20 09:09:47.853801] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.897 [2024-11-20 09:09:47.866154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.897 [2024-11-20 09:09:47.866563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.897 [2024-11-20 09:09:47.866580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.897 [2024-11-20 09:09:47.866588] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.897 [2024-11-20 09:09:47.866764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.897 [2024-11-20 09:09:47.866940] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.897 [2024-11-20 09:09:47.866955] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.866962] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.866968] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.898 [2024-11-20 09:09:47.879282] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.898 [2024-11-20 09:09:47.879715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.898 [2024-11-20 09:09:47.879732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.898 [2024-11-20 09:09:47.879739] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.898 [2024-11-20 09:09:47.879915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.898 [2024-11-20 09:09:47.880096] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.898 [2024-11-20 09:09:47.880104] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.880111] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.880121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.898 [2024-11-20 09:09:47.892439] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.898 [2024-11-20 09:09:47.892875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.898 [2024-11-20 09:09:47.892892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.898 [2024-11-20 09:09:47.892899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.898 [2024-11-20 09:09:47.893081] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.898 [2024-11-20 09:09:47.893257] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.898 [2024-11-20 09:09:47.893265] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.893272] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.893278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.898 [2024-11-20 09:09:47.905599] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.898 [2024-11-20 09:09:47.906045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.898 [2024-11-20 09:09:47.906063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.898 [2024-11-20 09:09:47.906070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.898 [2024-11-20 09:09:47.906246] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.898 [2024-11-20 09:09:47.906423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.898 [2024-11-20 09:09:47.906431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.906437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.906444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.898 [2024-11-20 09:09:47.918767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.898 [2024-11-20 09:09:47.919207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.898 [2024-11-20 09:09:47.919225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.898 [2024-11-20 09:09:47.919232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.898 [2024-11-20 09:09:47.919408] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.898 [2024-11-20 09:09:47.919584] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.898 [2024-11-20 09:09:47.919592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.919599] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.919605] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:31.898 [2024-11-20 09:09:47.931923] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:31.898 [2024-11-20 09:09:47.932295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:31.898 [2024-11-20 09:09:47.932313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:31.898 [2024-11-20 09:09:47.932320] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:31.898 [2024-11-20 09:09:47.932495] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:31.898 [2024-11-20 09:09:47.932672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:31.898 [2024-11-20 09:09:47.932680] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:31.898 [2024-11-20 09:09:47.932687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:31.898 [2024-11-20 09:09:47.932694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:47.945015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:47.945415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:47.945458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:47.945480] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:47.946069] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:47.946536] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:47.946544] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:47.946551] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:47.946557] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:47.958080] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:47.958382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:47.958426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:47.958450] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:47.959039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:47.959304] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:47.959311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:47.959318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:47.959324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:47.971166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:47.971435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:47.971451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:47.971461] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:47.971632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:47.971802] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:47.971810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:47.971817] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:47.971823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:47.984139] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:47.984500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:47.984545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:47.984568] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:47.985158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:47.985384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:47.985392] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:47.985398] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:47.985404] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:47.997110] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:47.997477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:47.997493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:47.997500] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:47.997671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:47.997846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:47.997854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:47.997860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:47.997866] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:48.010111] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:48.010464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:48.010480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:48.010487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:48.010657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:48.010828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:48.010840] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:48.010846] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:48.010853] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 9462.33 IOPS, 36.96 MiB/s [2024-11-20T08:09:48.200Z] [2024-11-20 09:09:48.023007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:48.023384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:48.023401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:48.023408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:48.023580] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:48.023751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:48.023759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:48.023766] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:48.023773] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:48.035864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:48.036146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:48.036163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:48.036171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:48.036344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:48.036515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:48.036524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:48.036530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:48.036539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:48.048804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:48.049207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:48.049253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:48.049275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:48.049736] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:48.049898] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:48.049906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:48.049916] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:48.049922] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:48.061770] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.159 [2024-11-20 09:09:48.062058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.159 [2024-11-20 09:09:48.062075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.159 [2024-11-20 09:09:48.062082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.159 [2024-11-20 09:09:48.062253] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.159 [2024-11-20 09:09:48.062423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.159 [2024-11-20 09:09:48.062431] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.159 [2024-11-20 09:09:48.062438] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.159 [2024-11-20 09:09:48.062444] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.159 [2024-11-20 09:09:48.074635] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.075002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.075018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.075026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.075197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.075367] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.075376] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.075382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.075389] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.087742] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.088116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.088133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.088141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.088311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.088483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.088491] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.088497] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.088504] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.100896] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.101204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.101220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.101228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.101403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.101580] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.101589] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.101595] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.101601] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.113778] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.114149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.114166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.114174] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.114349] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.114526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.114535] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.114542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.114549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.126801] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.127147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.127163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.127171] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.127341] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.127514] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.127522] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.127528] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.127535] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.139663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.140044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.140061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.140073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.140254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.140417] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.140425] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.140431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.140436] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.152662] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.153081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.153126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.153149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.153726] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.153925] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.153933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.153939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.153945] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.165628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.166006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.166023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.166031] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.166202] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.166373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.166381] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.166388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.166394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.178580] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.178981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.179026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.179049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.179629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.180049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.180059] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.180065] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.160 [2024-11-20 09:09:48.180071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.160 [2024-11-20 09:09:48.191605] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.160 [2024-11-20 09:09:48.192001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.160 [2024-11-20 09:09:48.192018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.160 [2024-11-20 09:09:48.192026] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.160 [2024-11-20 09:09:48.192203] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.160 [2024-11-20 09:09:48.192380] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.160 [2024-11-20 09:09:48.192388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.160 [2024-11-20 09:09:48.192394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.161 [2024-11-20 09:09:48.192401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.421 [2024-11-20 09:09:48.204621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.421 [2024-11-20 09:09:48.205051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.421 [2024-11-20 09:09:48.205068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.421 [2024-11-20 09:09:48.205075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.421 [2024-11-20 09:09:48.205251] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.421 [2024-11-20 09:09:48.205413] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.421 [2024-11-20 09:09:48.205421] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.421 [2024-11-20 09:09:48.205427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.421 [2024-11-20 09:09:48.205433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.421 [2024-11-20 09:09:48.217539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.421 [2024-11-20 09:09:48.217967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.421 [2024-11-20 09:09:48.217984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.421 [2024-11-20 09:09:48.217992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.421 [2024-11-20 09:09:48.218163] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.421 [2024-11-20 09:09:48.218334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.421 [2024-11-20 09:09:48.218342] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.421 [2024-11-20 09:09:48.218352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.422 [2024-11-20 09:09:48.218359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.422 [2024-11-20 09:09:48.230447] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.230864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.230879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.230887] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.231063] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.231235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.231243] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.231249] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.231255] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.243402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.243806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.243849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.243872] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.244321] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.244494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.244502] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.244509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.244515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.256311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.256701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.256716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.256723] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.256884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.257072] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.257081] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.257088] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.257094] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.269195] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.269592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.269608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.269616] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.269786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.269963] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.269972] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.269978] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.269984] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.282099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.282501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.282517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.282524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.282685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.282847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.282855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.282862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.282867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.294959] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.295341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.295358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.295365] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.295535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.295706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.295715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.295721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.295727] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.307883] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.308284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.308300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.308310] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.308471] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.308633] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.308641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.308647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.308653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.320787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.321208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.321225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.321232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.321403] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.321573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.321581] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.321587] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.321593] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.333582] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.333974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.333990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.333996] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.334157] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.334319] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.422 [2024-11-20 09:09:48.334326] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.422 [2024-11-20 09:09:48.334332] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.422 [2024-11-20 09:09:48.334338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.422 [2024-11-20 09:09:48.346410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.422 [2024-11-20 09:09:48.346819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.422 [2024-11-20 09:09:48.346836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.422 [2024-11-20 09:09:48.346843] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.422 [2024-11-20 09:09:48.347020] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.422 [2024-11-20 09:09:48.347195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.347203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.347210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.347217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.359526] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.359960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.359977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.359985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.360161] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.360362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.360370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.360377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.360385] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.372443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.372841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.372857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.372865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.373042] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.373214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.373222] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.373228] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.373234] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.385443] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.385872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.385888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.385896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.386093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.386271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.386279] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.386287] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.386297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.398502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.398902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.398919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.398927] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.399104] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.399276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.399285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.399291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.399298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.411432] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.411817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.411861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.411884] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.412393] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.412565] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.412573] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.412579] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.412586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.424316] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.424711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.424728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.424735] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.424905] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.425082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.425091] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.425098] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.425104] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.437321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.437742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.437759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.437766] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.437937] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.438113] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.438123] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.438129] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.438135] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.423 [2024-11-20 09:09:48.450278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.423 [2024-11-20 09:09:48.450681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.423 [2024-11-20 09:09:48.450725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.423 [2024-11-20 09:09:48.450748] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.423 [2024-11-20 09:09:48.451340] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.423 [2024-11-20 09:09:48.451778] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.423 [2024-11-20 09:09:48.451785] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.423 [2024-11-20 09:09:48.451792] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.423 [2024-11-20 09:09:48.451798] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.683 [2024-11-20 09:09:48.463371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.683 [2024-11-20 09:09:48.463790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.683 [2024-11-20 09:09:48.463822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.683 [2024-11-20 09:09:48.463830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.683 [2024-11-20 09:09:48.464013] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.683 [2024-11-20 09:09:48.464189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.683 [2024-11-20 09:09:48.464197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.683 [2024-11-20 09:09:48.464204] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.683 [2024-11-20 09:09:48.464210] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.683 [2024-11-20 09:09:48.476209] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.684 [2024-11-20 09:09:48.476635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.684 [2024-11-20 09:09:48.476681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.684 [2024-11-20 09:09:48.476711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.684 [2024-11-20 09:09:48.477300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.684 [2024-11-20 09:09:48.477518] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.684 [2024-11-20 09:09:48.477526] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.684 [2024-11-20 09:09:48.477532] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.684 [2024-11-20 09:09:48.477539] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.684 [2024-11-20 09:09:48.489089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.684 [2024-11-20 09:09:48.489517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.684 [2024-11-20 09:09:48.489534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.684 [2024-11-20 09:09:48.489541] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.684 [2024-11-20 09:09:48.489712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.684 [2024-11-20 09:09:48.489883] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.684 [2024-11-20 09:09:48.489891] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.684 [2024-11-20 09:09:48.489898] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.684 [2024-11-20 09:09:48.489904] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.684 [2024-11-20 09:09:48.501970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.684 [2024-11-20 09:09:48.502385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.684 [2024-11-20 09:09:48.502402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.684 [2024-11-20 09:09:48.502409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.684 [2024-11-20 09:09:48.502579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.684 [2024-11-20 09:09:48.502754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.684 [2024-11-20 09:09:48.502762] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.684 [2024-11-20 09:09:48.502768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.684 [2024-11-20 09:09:48.502775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.684 [2024-11-20 09:09:48.514938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.684 [2024-11-20 09:09:48.515346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.684 [2024-11-20 09:09:48.515362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.684 [2024-11-20 09:09:48.515370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.684 [2024-11-20 09:09:48.515540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.684 [2024-11-20 09:09:48.515715] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.684 [2024-11-20 09:09:48.515723] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.684 [2024-11-20 09:09:48.515729] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.684 [2024-11-20 09:09:48.515736] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.684 [2024-11-20 09:09:48.527861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.684 [2024-11-20 09:09:48.528195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.684 [2024-11-20 09:09:48.528213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.684 [2024-11-20 09:09:48.528220] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.684 [2024-11-20 09:09:48.528391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.684 [2024-11-20 09:09:48.528562] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.684 [2024-11-20 09:09:48.528570] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.684 [2024-11-20 09:09:48.528577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.684 [2024-11-20 09:09:48.528583] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.684 [2024-11-20 09:09:48.540656] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.684 [2024-11-20 09:09:48.541091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.684 [2024-11-20 09:09:48.541137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.684 [2024-11-20 09:09:48.541159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.684 [2024-11-20 09:09:48.541745] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.684 [2024-11-20 09:09:48.541916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.684 [2024-11-20 09:09:48.541925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.684 [2024-11-20 09:09:48.541932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.684 [2024-11-20 09:09:48.541938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.684 [2024-11-20 09:09:48.553539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.684 [2024-11-20 09:09:48.553929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.684 [2024-11-20 09:09:48.553945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.684 [2024-11-20 09:09:48.553957] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.684 [2024-11-20 09:09:48.554143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.684 [2024-11-20 09:09:48.554314] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.684 [2024-11-20 09:09:48.554322] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.684 [2024-11-20 09:09:48.554329] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.684 [2024-11-20 09:09:48.554338] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.684 [2024-11-20 09:09:48.566397] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.684 [2024-11-20 09:09:48.566815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.684 [2024-11-20 09:09:48.566832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.684 [2024-11-20 09:09:48.566839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.684 [2024-11-20 09:09:48.567016] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.684 [2024-11-20 09:09:48.567188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.684 [2024-11-20 09:09:48.567196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.684 [2024-11-20 09:09:48.567202] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.684 [2024-11-20 09:09:48.567208] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.684 [2024-11-20 09:09:48.579169] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.684 [2024-11-20 09:09:48.579557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.684 [2024-11-20 09:09:48.579572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.684 [2024-11-20 09:09:48.579578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.684 [2024-11-20 09:09:48.579740] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.684 [2024-11-20 09:09:48.579901] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.684 [2024-11-20 09:09:48.579908] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.684 [2024-11-20 09:09:48.579914] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.684 [2024-11-20 09:09:48.579920] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.684 [2024-11-20 09:09:48.592136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.684 [2024-11-20 09:09:48.592585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.684 [2024-11-20 09:09:48.592635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.684 [2024-11-20 09:09:48.592659] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.684 [2024-11-20 09:09:48.593130] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.684 [2024-11-20 09:09:48.593303] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.684 [2024-11-20 09:09:48.593311] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.593318] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.593324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.605051] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.605497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.605514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.605522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.605685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.605847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.605855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.605861] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.605867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.618198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.618628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.618669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.618694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.619283] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.619863] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.619888] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.619908] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.619928] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.631066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.631442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.631486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.631510] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.632000] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.632172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.632181] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.632187] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.632193] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.643981] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.644372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.644387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.644397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.644559] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.644720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.644727] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.644733] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.644739] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.656890] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.657272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.657289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.657296] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.657466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.657638] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.657646] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.657653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.657658] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.669795] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.670217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.670233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.670240] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.670411] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.670582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.670590] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.670597] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.670604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.682682] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.683099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.683116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.683123] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.683293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.683468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.683476] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.683482] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.683488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.695544] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.695968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.696015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.696038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.696616] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.697207] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.697237] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.697243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.697250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.708330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.685 [2024-11-20 09:09:48.708724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.685 [2024-11-20 09:09:48.708740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.685 [2024-11-20 09:09:48.708747] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.685 [2024-11-20 09:09:48.708908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.685 [2024-11-20 09:09:48.709097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.685 [2024-11-20 09:09:48.709106] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.685 [2024-11-20 09:09:48.709113] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.685 [2024-11-20 09:09:48.709119] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.685 [2024-11-20 09:09:48.721516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.946 [2024-11-20 09:09:48.721924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.946 [2024-11-20 09:09:48.721942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.946 [2024-11-20 09:09:48.721955] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.946 [2024-11-20 09:09:48.722132] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.946 [2024-11-20 09:09:48.722308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.946 [2024-11-20 09:09:48.722317] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.946 [2024-11-20 09:09:48.722323] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.946 [2024-11-20 09:09:48.722336] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.946 [2024-11-20 09:09:48.734368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.946 [2024-11-20 09:09:48.734758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.946 [2024-11-20 09:09:48.734775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.946 [2024-11-20 09:09:48.734782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.946 [2024-11-20 09:09:48.734943] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.946 [2024-11-20 09:09:48.735136] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.946 [2024-11-20 09:09:48.735144] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.946 [2024-11-20 09:09:48.735151] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.946 [2024-11-20 09:09:48.735157] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.946 [2024-11-20 09:09:48.747211] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.946 [2024-11-20 09:09:48.747649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.946 [2024-11-20 09:09:48.747692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.946 [2024-11-20 09:09:48.747715] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.946 [2024-11-20 09:09:48.748120] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.946 [2024-11-20 09:09:48.748292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.946 [2024-11-20 09:09:48.748300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.946 [2024-11-20 09:09:48.748306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.946 [2024-11-20 09:09:48.748313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.946 [2024-11-20 09:09:48.760116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.946 [2024-11-20 09:09:48.760528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.946 [2024-11-20 09:09:48.760545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.946 [2024-11-20 09:09:48.760552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.760723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.760893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.760901] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.760907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.760914] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.773202] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.773617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.773633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.773640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.773810] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.774004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.774013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.774019] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.774026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.786119] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.786514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.786531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.786538] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.786708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.786879] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.786887] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.786893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.786899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.799137] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.799494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.799537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.799560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.800151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.800449] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.800456] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.800463] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.800469] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.811929] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.812268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.812284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.812295] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.812456] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.812618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.812626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.812632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.812638] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.824840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.825255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.825271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.825278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.825449] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.825621] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.825629] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.825635] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.825641] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.837713] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:32.947 [2024-11-20 09:09:48.838101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:32.947 [2024-11-20 09:09:48.838138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:32.947 [2024-11-20 09:09:48.838163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:32.947 [2024-11-20 09:09:48.838693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:32.947 [2024-11-20 09:09:48.838855] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:32.947 [2024-11-20 09:09:48.838863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:32.947 [2024-11-20 09:09:48.838869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:32.947 [2024-11-20 09:09:48.838874] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:32.947 [2024-11-20 09:09:48.850490] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.947 [2024-11-20 09:09:48.850808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.947 [2024-11-20 09:09:48.850824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.947 [2024-11-20 09:09:48.850831] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.947 [2024-11-20 09:09:48.851015] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.947 [2024-11-20 09:09:48.851190] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.947 [2024-11-20 09:09:48.851198] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.947 [2024-11-20 09:09:48.851205] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.947 [2024-11-20 09:09:48.851211] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.947 [2024-11-20 09:09:48.863269] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.947 [2024-11-20 09:09:48.863663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.947 [2024-11-20 09:09:48.863679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.947 [2024-11-20 09:09:48.863686] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.947 [2024-11-20 09:09:48.863847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.947 [2024-11-20 09:09:48.864034] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.947 [2024-11-20 09:09:48.864043] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.947 [2024-11-20 09:09:48.864050] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.947 [2024-11-20 09:09:48.864056] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.947 [2024-11-20 09:09:48.876429] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.947 [2024-11-20 09:09:48.876913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.947 [2024-11-20 09:09:48.876970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.947 [2024-11-20 09:09:48.876995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.947 [2024-11-20 09:09:48.877557] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.947 [2024-11-20 09:09:48.877742] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.947 [2024-11-20 09:09:48.877750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.947 [2024-11-20 09:09:48.877756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.947 [2024-11-20 09:09:48.877762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.947 [2024-11-20 09:09:48.889505] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.947 [2024-11-20 09:09:48.889925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.889941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.889952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.890123] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.890293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.890301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.890307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.890317] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.902391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.902806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.902822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.902830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.903024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.903200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.903219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.903225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.903232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.915290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.915617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.915633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.915640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.915811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.915988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.915997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.916003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.916010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.928116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.928506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.928522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.928529] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.928690] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.928852] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.928859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.928865] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.928871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.941002] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.941427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.941471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.941494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.941953] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.942126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.942134] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.942140] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.942147] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.953802] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.954212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.954229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.954237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.954407] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.954579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.954587] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.954593] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.954599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.966667] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.967090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.967136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.967159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.967735] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.968131] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.968140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.968146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.968152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:32.948 [2024-11-20 09:09:48.979563] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:32.948 [2024-11-20 09:09:48.979967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:32.948 [2024-11-20 09:09:48.980012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:32.948 [2024-11-20 09:09:48.980042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:32.948 [2024-11-20 09:09:48.980619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:32.948 [2024-11-20 09:09:48.980827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:32.948 [2024-11-20 09:09:48.980835] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:32.948 [2024-11-20 09:09:48.980841] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:32.948 [2024-11-20 09:09:48.980848] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.209 [2024-11-20 09:09:48.992583] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.209 [2024-11-20 09:09:48.992998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.209 [2024-11-20 09:09:48.993016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.209 [2024-11-20 09:09:48.993023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.209 [2024-11-20 09:09:48.993609] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.209 [2024-11-20 09:09:48.994143] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.209 [2024-11-20 09:09:48.994152] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.209 [2024-11-20 09:09:48.994158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.209 [2024-11-20 09:09:48.994165] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.209 [2024-11-20 09:09:49.005523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.209 [2024-11-20 09:09:49.005935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.209 [2024-11-20 09:09:49.005992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.209 [2024-11-20 09:09:49.006016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.209 [2024-11-20 09:09:49.006594] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.209 [2024-11-20 09:09:49.007296] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.209 [2024-11-20 09:09:49.007319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.209 [2024-11-20 09:09:49.007345] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.209 [2024-11-20 09:09:49.007354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.209 7096.75 IOPS, 27.72 MiB/s [2024-11-20T08:09:49.250Z] [2024-11-20 09:09:49.019982] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.209 [2024-11-20 09:09:49.020414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.209 [2024-11-20 09:09:49.020432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.209 [2024-11-20 09:09:49.020441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.209 [2024-11-20 09:09:49.020617] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.209 [2024-11-20 09:09:49.020799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.209 [2024-11-20 09:09:49.020807] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.209 [2024-11-20 09:09:49.020814] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.209 [2024-11-20 09:09:49.020821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.209 [2024-11-20 09:09:49.033046] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.209 [2024-11-20 09:09:49.033480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.209 [2024-11-20 09:09:49.033498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.209 [2024-11-20 09:09:49.033505] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.209 [2024-11-20 09:09:49.033682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.209 [2024-11-20 09:09:49.033859] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.209 [2024-11-20 09:09:49.033867] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.209 [2024-11-20 09:09:49.033874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.209 [2024-11-20 09:09:49.033881] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.209 [2024-11-20 09:09:49.046097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.209 [2024-11-20 09:09:49.046560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.209 [2024-11-20 09:09:49.046604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.209 [2024-11-20 09:09:49.046628] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.209 [2024-11-20 09:09:49.047173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.209 [2024-11-20 09:09:49.047351] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.209 [2024-11-20 09:09:49.047359] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.047366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.047372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.059184] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.059602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.059618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.059625] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.059787] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.059954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.059963] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.059972] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.059995] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.072102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.072573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.072618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.072641] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.073156] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.073348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.073357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.073363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.073370] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.085019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.085469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.085486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.085493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.085677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.085849] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.085857] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.085863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.085869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.098019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.098485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.098502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.098509] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.098685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.098862] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.098871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.098877] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.098883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.111039] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.111481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.111497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.111504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.111665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.111827] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.111834] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.111840] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.111847] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.124143] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.124494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.124511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.124519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.124705] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.124876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.124885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.124892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.124898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.137323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.137682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.137698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.137706] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.137883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.138066] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.138076] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.138084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.138091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.150398] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.210 [2024-11-20 09:09:49.150825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.210 [2024-11-20 09:09:49.150841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.210 [2024-11-20 09:09:49.150852] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.210 [2024-11-20 09:09:49.151028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.210 [2024-11-20 09:09:49.151200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.210 [2024-11-20 09:09:49.151208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.210 [2024-11-20 09:09:49.151214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.210 [2024-11-20 09:09:49.151220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.210 [2024-11-20 09:09:49.163538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.210 [2024-11-20 09:09:49.163973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.210 [2024-11-20 09:09:49.163990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.210 [2024-11-20 09:09:49.163997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.210 [2024-11-20 09:09:49.164168] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.210 [2024-11-20 09:09:49.164342] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.210 [2024-11-20 09:09:49.164350] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.210 [2024-11-20 09:09:49.164357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.210 [2024-11-20 09:09:49.164363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.210 [2024-11-20 09:09:49.176666] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.210 [2024-11-20 09:09:49.177085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.210 [2024-11-20 09:09:49.177102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.210 [2024-11-20 09:09:49.177110] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.210 [2024-11-20 09:09:49.177296] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.210 [2024-11-20 09:09:49.177468] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.177477] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.177485] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.177492] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.211 [2024-11-20 09:09:49.189752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.211 [2024-11-20 09:09:49.190130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.211 [2024-11-20 09:09:49.190147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.211 [2024-11-20 09:09:49.190154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.211 [2024-11-20 09:09:49.190330] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.211 [2024-11-20 09:09:49.190513] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.190521] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.190527] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.190534] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.211 [2024-11-20 09:09:49.202970] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.211 [2024-11-20 09:09:49.203386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.211 [2024-11-20 09:09:49.203402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.211 [2024-11-20 09:09:49.203409] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.211 [2024-11-20 09:09:49.203586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.211 [2024-11-20 09:09:49.203762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.203771] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.203777] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.203784] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.211 [2024-11-20 09:09:49.216167] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.211 [2024-11-20 09:09:49.216584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.211 [2024-11-20 09:09:49.216601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.211 [2024-11-20 09:09:49.216609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.211 [2024-11-20 09:09:49.216784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.211 [2024-11-20 09:09:49.216993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.217013] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.217021] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.217027] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.211 [2024-11-20 09:09:49.229231] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.211 [2024-11-20 09:09:49.229506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.211 [2024-11-20 09:09:49.229523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.211 [2024-11-20 09:09:49.229531] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.211 [2024-11-20 09:09:49.229708] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.211 [2024-11-20 09:09:49.229886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.229894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.229905] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.229911] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.211 [2024-11-20 09:09:49.242270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.211 [2024-11-20 09:09:49.242679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.211 [2024-11-20 09:09:49.242696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.211 [2024-11-20 09:09:49.242703] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.211 [2024-11-20 09:09:49.242879] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.211 [2024-11-20 09:09:49.243060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.211 [2024-11-20 09:09:49.243069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.211 [2024-11-20 09:09:49.243075] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.211 [2024-11-20 09:09:49.243082] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.471 [2024-11-20 09:09:49.255393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.471 [2024-11-20 09:09:49.255691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.471 [2024-11-20 09:09:49.255709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.471 [2024-11-20 09:09:49.255716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.471 [2024-11-20 09:09:49.255891] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.471 [2024-11-20 09:09:49.256074] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.471 [2024-11-20 09:09:49.256083] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.471 [2024-11-20 09:09:49.256090] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.471 [2024-11-20 09:09:49.256096] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.471 [2024-11-20 09:09:49.268489] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.471 [2024-11-20 09:09:49.268821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.471 [2024-11-20 09:09:49.268839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.471 [2024-11-20 09:09:49.268846] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.471 [2024-11-20 09:09:49.269027] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.471 [2024-11-20 09:09:49.269204] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.471 [2024-11-20 09:09:49.269212] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.471 [2024-11-20 09:09:49.269219] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.471 [2024-11-20 09:09:49.269225] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.471 [2024-11-20 09:09:49.281551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.471 [2024-11-20 09:09:49.281872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.471 [2024-11-20 09:09:49.281888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.471 [2024-11-20 09:09:49.281896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.471 [2024-11-20 09:09:49.282076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.471 [2024-11-20 09:09:49.282252] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.471 [2024-11-20 09:09:49.282261] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.471 [2024-11-20 09:09:49.282267] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.471 [2024-11-20 09:09:49.282273] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.471 [2024-11-20 09:09:49.294618] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.471 [2024-11-20 09:09:49.294954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.471 [2024-11-20 09:09:49.294970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.471 [2024-11-20 09:09:49.294978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.471 [2024-11-20 09:09:49.295153] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.471 [2024-11-20 09:09:49.295329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.471 [2024-11-20 09:09:49.295337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.471 [2024-11-20 09:09:49.295344] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.471 [2024-11-20 09:09:49.295350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.471 [2024-11-20 09:09:49.307749] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.471 [2024-11-20 09:09:49.308211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.471 [2024-11-20 09:09:49.308254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.471 [2024-11-20 09:09:49.308276] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.471 [2024-11-20 09:09:49.308709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.471 [2024-11-20 09:09:49.308886] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.308894] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.308902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.308908] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.320845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.321221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.321238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.321249] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.321425] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.321603] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.321611] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.321618] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.321624] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.333844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.334195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.334239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.334261] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.334828] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.335012] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.335021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.335027] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.335033] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.346869] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.347221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.347237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.347245] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.347428] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.347599] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.347607] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.347614] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.347620] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.359987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.360269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.360286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.360293] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.360469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.360651] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.360659] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.360666] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.360672] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.373030] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.373326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.373369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.373392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.373885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.374068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.374077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.374084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.374090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.386019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.386340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.386357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.386364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.386535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.386706] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.386714] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.386721] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.386729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.399145] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.399495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.399512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.399519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.399695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.399877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.399884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.399895] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.399901] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.412053] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.412426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.412443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.412451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.412622] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.412794] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.412803] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.412809] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.412815] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.425099] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.425384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.425400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.425408] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.425579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.425750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.425758] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.425764] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.425770] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.472 [2024-11-20 09:09:49.438132] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.472 [2024-11-20 09:09:49.438553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.472 [2024-11-20 09:09:49.438570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.472 [2024-11-20 09:09:49.438578] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.472 [2024-11-20 09:09:49.438754] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.472 [2024-11-20 09:09:49.438931] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.472 [2024-11-20 09:09:49.438939] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.472 [2024-11-20 09:09:49.438946] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.472 [2024-11-20 09:09:49.438960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.473 [2024-11-20 09:09:49.451270] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.473 [2024-11-20 09:09:49.451658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.473 [2024-11-20 09:09:49.451675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.473 [2024-11-20 09:09:49.451682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.473 [2024-11-20 09:09:49.451858] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.473 [2024-11-20 09:09:49.452041] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.473 [2024-11-20 09:09:49.452050] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.473 [2024-11-20 09:09:49.452057] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.473 [2024-11-20 09:09:49.452063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.473 [2024-11-20 09:09:49.464402] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.473 [2024-11-20 09:09:49.464673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.473 [2024-11-20 09:09:49.464690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.473 [2024-11-20 09:09:49.464698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.473 [2024-11-20 09:09:49.464873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.473 [2024-11-20 09:09:49.465055] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.473 [2024-11-20 09:09:49.465064] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.473 [2024-11-20 09:09:49.465071] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.473 [2024-11-20 09:09:49.465078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.473 [2024-11-20 09:09:49.477552] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.473 [2024-11-20 09:09:49.477909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.473 [2024-11-20 09:09:49.477926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.473 [2024-11-20 09:09:49.477933] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.473 [2024-11-20 09:09:49.478116] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.473 [2024-11-20 09:09:49.478293] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.473 [2024-11-20 09:09:49.478302] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.473 [2024-11-20 09:09:49.478308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.473 [2024-11-20 09:09:49.478315] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.473 [2024-11-20 09:09:49.490616] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.473 [2024-11-20 09:09:49.491048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.473 [2024-11-20 09:09:49.491065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.473 [2024-11-20 09:09:49.491075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.473 [2024-11-20 09:09:49.491252] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.473 [2024-11-20 09:09:49.491427] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.473 [2024-11-20 09:09:49.491435] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.473 [2024-11-20 09:09:49.491442] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.473 [2024-11-20 09:09:49.491448] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.473 [2024-11-20 09:09:49.503766] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.473 [2024-11-20 09:09:49.504135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.473 [2024-11-20 09:09:49.504152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.473 [2024-11-20 09:09:49.504159] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.473 [2024-11-20 09:09:49.504335] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.473 [2024-11-20 09:09:49.504511] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.473 [2024-11-20 09:09:49.504519] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.473 [2024-11-20 09:09:49.504526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.473 [2024-11-20 09:09:49.504532] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.516891] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.517274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.517318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.517341] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.517917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.518379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.518387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.518394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.518401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.530049] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.530354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.530371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.530378] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.530555] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.530734] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.530743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.530749] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.530756] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.543233] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.543666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.543683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.543690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.543866] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.544049] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.544058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.544064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.544071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.556396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.556846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.556863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.556870] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.557056] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.557238] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.557247] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.557253] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.557260] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.569596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.570013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.570031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.570038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.570214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.570390] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.570398] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.570408] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.570415] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.582993] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.583351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.583368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.583375] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.583552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.583728] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.583737] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.583744] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.583750] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.596060] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.596500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.596517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.596524] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.596701] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.596877] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.596886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.733 [2024-11-20 09:09:49.596892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.733 [2024-11-20 09:09:49.596899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.733 [2024-11-20 09:09:49.609216] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.733 [2024-11-20 09:09:49.609648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.733 [2024-11-20 09:09:49.609664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.733 [2024-11-20 09:09:49.609672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.733 [2024-11-20 09:09:49.609848] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.733 [2024-11-20 09:09:49.610030] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.733 [2024-11-20 09:09:49.610039] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.610045] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.610051] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.622313] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.622679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.622696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.622704] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.622880] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.623062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.623071] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.623078] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.623085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.635393] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.635746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.635763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.635770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.635946] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.636130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.636138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.636145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.636151] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.648464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.648899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.648916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.648923] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.649106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.649283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.649292] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.649299] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.649305] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.661615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.662025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.662042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.662053] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.662229] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.662406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.662415] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.662421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.662427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.674665] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.675111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.675129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.675136] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.675312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.675488] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.675496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.675503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.675510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.687504] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.687920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.687936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.687944] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.688122] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.688292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.688301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.688307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.688313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.700430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.700874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.700890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.700898] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.701101] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.701276] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.701284] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.701291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.701297] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.734 [2024-11-20 09:09:49.713309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.734 [2024-11-20 09:09:49.713742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.734 [2024-11-20 09:09:49.713787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.734 [2024-11-20 09:09:49.713810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.734 [2024-11-20 09:09:49.714402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.734 [2024-11-20 09:09:49.714801] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.734 [2024-11-20 09:09:49.714809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.734 [2024-11-20 09:09:49.714816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.734 [2024-11-20 09:09:49.714822] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.735 [2024-11-20 09:09:49.726156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.735 [2024-11-20 09:09:49.726398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.735 [2024-11-20 09:09:49.726413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.735 [2024-11-20 09:09:49.726421] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.735 [2024-11-20 09:09:49.726582] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.735 [2024-11-20 09:09:49.726743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.735 [2024-11-20 09:09:49.726750] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.735 [2024-11-20 09:09:49.726756] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.735 [2024-11-20 09:09:49.726762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.735 [2024-11-20 09:09:49.738991] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.735 [2024-11-20 09:09:49.739448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.735 [2024-11-20 09:09:49.739465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.735 [2024-11-20 09:09:49.739472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.735 [2024-11-20 09:09:49.739643] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.735 [2024-11-20 09:09:49.739815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.735 [2024-11-20 09:09:49.739823] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.735 [2024-11-20 09:09:49.739830] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.735 [2024-11-20 09:09:49.739840] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.735 [2024-11-20 09:09:49.751887] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:33.735 [2024-11-20 09:09:49.752316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:33.735 [2024-11-20 09:09:49.752332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:33.735 [2024-11-20 09:09:49.752339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:33.735 [2024-11-20 09:09:49.752501] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:33.735 [2024-11-20 09:09:49.752662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:33.735 [2024-11-20 09:09:49.752669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:33.735 [2024-11-20 09:09:49.752675] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:33.735 [2024-11-20 09:09:49.752681] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:33.735 [2024-11-20 09:09:49.764664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.735 [2024-11-20 09:09:49.765089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.735 [2024-11-20 09:09:49.765105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.735 [2024-11-20 09:09:49.765112] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.735 [2024-11-20 09:09:49.765273] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.735 [2024-11-20 09:09:49.765435] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.735 [2024-11-20 09:09:49.765443] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.735 [2024-11-20 09:09:49.765449] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.735 [2024-11-20 09:09:49.765455] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.995 [2024-11-20 09:09:49.777739] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.995 [2024-11-20 09:09:49.778087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.995 [2024-11-20 09:09:49.778104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.995 [2024-11-20 09:09:49.778111] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.995 [2024-11-20 09:09:49.778295] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.995 [2024-11-20 09:09:49.778466] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.995 [2024-11-20 09:09:49.778474] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.995 [2024-11-20 09:09:49.778481] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.995 [2024-11-20 09:09:49.778488] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.995 [2024-11-20 09:09:49.790604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.995 [2024-11-20 09:09:49.791021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.995 [2024-11-20 09:09:49.791039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.995 [2024-11-20 09:09:49.791046] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.995 [2024-11-20 09:09:49.791217] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.995 [2024-11-20 09:09:49.791388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.995 [2024-11-20 09:09:49.791396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.995 [2024-11-20 09:09:49.791402] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.995 [2024-11-20 09:09:49.791408] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.995 [2024-11-20 09:09:49.803547] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.995 [2024-11-20 09:09:49.803967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.995 [2024-11-20 09:09:49.803983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.995 [2024-11-20 09:09:49.803989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.995 [2024-11-20 09:09:49.804151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.995 [2024-11-20 09:09:49.804312] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.995 [2024-11-20 09:09:49.804319] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.995 [2024-11-20 09:09:49.804326] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.995 [2024-11-20 09:09:49.804331] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.995 [2024-11-20 09:09:49.816371] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.995 [2024-11-20 09:09:49.816802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.995 [2024-11-20 09:09:49.816819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.995 [2024-11-20 09:09:49.816826] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.995 [2024-11-20 09:09:49.817028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.995 [2024-11-20 09:09:49.817199] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.995 [2024-11-20 09:09:49.817207] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.995 [2024-11-20 09:09:49.817214] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.817220] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.829183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.829509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.829525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.829535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.829697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.829858] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.829866] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.829872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.829878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.842050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.842484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.842500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.842507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.842668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.842830] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.842837] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.842844] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.842849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.854838] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.855281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.855326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.855349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.855839] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.856014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.856023] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.856030] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.856037] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.867734] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.868042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.868057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.868064] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.868225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.868391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.868399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.868405] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.868411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.880576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.880968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.880985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.880991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.881152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.881313] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.881321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.881327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.881333] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.893464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.893815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.893831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.893838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.894023] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.894194] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.894203] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.894210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.894217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.906659] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.907092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.907110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.907117] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.907293] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.907469] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.907478] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.907484] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.907495] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.919535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.996 [2024-11-20 09:09:49.919842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.996 [2024-11-20 09:09:49.919858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.996 [2024-11-20 09:09:49.919865] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.996 [2024-11-20 09:09:49.920051] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.996 [2024-11-20 09:09:49.920222] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.996 [2024-11-20 09:09:49.920230] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.996 [2024-11-20 09:09:49.920236] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.996 [2024-11-20 09:09:49.920242] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.996 [2024-11-20 09:09:49.932323] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.932653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.932670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.932676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.932837] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.933022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.933030] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.933037] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.933043] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:49.945219] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.945617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.945633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.945640] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.945801] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.945968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.945992] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.945999] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.946005] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:49.958123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.958568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.958585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.958592] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.958763] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.958934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.958942] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.958955] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.958961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:49.970907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.971351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.971395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.971419] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.972007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.972211] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.972219] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.972226] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.972232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:49.983774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.984192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.984209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.984216] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.984387] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.984557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.984565] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.984572] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.984578] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:49.996557] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:49.996999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:49.997016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:49.997028] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:49.997199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:49.997375] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:49.997382] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:49.997388] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:49.997394] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 [2024-11-20 09:09:50.009674] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:50.010105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:50.010123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:50.010130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:50.010312] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:50.010489] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:50.010497] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:50.010505] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:50.010511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:33.997 5677.40 IOPS, 22.18 MiB/s [2024-11-20T08:09:50.038Z]
00:25:33.997 [2024-11-20 09:09:50.023503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:33.997 [2024-11-20 09:09:50.023885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:33.997 [2024-11-20 09:09:50.023902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:33.997 [2024-11-20 09:09:50.023911] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:33.997 [2024-11-20 09:09:50.024093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:33.997 [2024-11-20 09:09:50.024278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:33.997 [2024-11-20 09:09:50.024286] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:33.997 [2024-11-20 09:09:50.024293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:33.997 [2024-11-20 09:09:50.024299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.036709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.037161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.037179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.037186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.037362] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.037543] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.037551] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.037558] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.037565] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.049968] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.050415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.050433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.050441] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.050619] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.050797] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.050805] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.050812] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.050819] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.063055] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.063464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.063481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.063488] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.063665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.063841] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.063850] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.063856] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.063863] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.076196] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.076608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.076625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.076632] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.076808] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.076991] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.077001] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.077011] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.077018] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.089322] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.089668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.089685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.089693] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.089869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.090051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.090060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.090066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.090073] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.102471] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.102798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.102816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.102824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.103006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.103183] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.103192] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.103198] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.103205] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.115532] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.258 [2024-11-20 09:09:50.115893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.258 [2024-11-20 09:09:50.115910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.258 [2024-11-20 09:09:50.115917] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.258 [2024-11-20 09:09:50.116114] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.258 [2024-11-20 09:09:50.116292] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.258 [2024-11-20 09:09:50.116301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.258 [2024-11-20 09:09:50.116308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.258 [2024-11-20 09:09:50.116314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.258 [2024-11-20 09:09:50.128614] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.258 [2024-11-20 09:09:50.129056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.258 [2024-11-20 09:09:50.129109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.129132] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.129677] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.129854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.129862] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.129870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.129876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.141683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.142009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.142026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.142033] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.142219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.142391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.142399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.142406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.142412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.154722] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.155091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.155109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.155118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.155294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.155472] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.155480] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.155487] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.155493] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.167820] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.168171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.168189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.168201] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.168377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.168552] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.168561] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.168568] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.168575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.180882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.181299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.181316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.181324] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.181500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.181676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.181684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.181691] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.181697] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.193892] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.194341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.194359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.194366] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.194543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.194720] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.194728] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.194735] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.194741] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.207050] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.207480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.207497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.207504] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.207680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.207860] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.207869] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.207876] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.207883] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.220194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.220612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.220657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.259 [2024-11-20 09:09:50.220681] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.259 [2024-11-20 09:09:50.221165] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.259 [2024-11-20 09:09:50.221343] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.259 [2024-11-20 09:09:50.221352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.259 [2024-11-20 09:09:50.221359] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.259 [2024-11-20 09:09:50.221365] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.259 [2024-11-20 09:09:50.233333] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.259 [2024-11-20 09:09:50.233682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.259 [2024-11-20 09:09:50.233699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.260 [2024-11-20 09:09:50.233707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.260 [2024-11-20 09:09:50.233883] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.260 [2024-11-20 09:09:50.234068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.260 [2024-11-20 09:09:50.234077] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.260 [2024-11-20 09:09:50.234083] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.260 [2024-11-20 09:09:50.234090] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.260 [2024-11-20 09:09:50.246383] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.260 [2024-11-20 09:09:50.246787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-11-20 09:09:50.246804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.260 [2024-11-20 09:09:50.246811] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.260 [2024-11-20 09:09:50.246993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.260 [2024-11-20 09:09:50.247171] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.260 [2024-11-20 09:09:50.247179] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.260 [2024-11-20 09:09:50.247190] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.260 [2024-11-20 09:09:50.247197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.260 [2024-11-20 09:09:50.259502] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.260 [2024-11-20 09:09:50.259917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-11-20 09:09:50.259934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.260 [2024-11-20 09:09:50.259942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.260 [2024-11-20 09:09:50.260124] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.260 [2024-11-20 09:09:50.260300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.260 [2024-11-20 09:09:50.260309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.260 [2024-11-20 09:09:50.260315] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.260 [2024-11-20 09:09:50.260322] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.260 [2024-11-20 09:09:50.272621] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.260 [2024-11-20 09:09:50.273032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-11-20 09:09:50.273050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.260 [2024-11-20 09:09:50.273057] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.260 [2024-11-20 09:09:50.273234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.260 [2024-11-20 09:09:50.273411] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.260 [2024-11-20 09:09:50.273420] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.260 [2024-11-20 09:09:50.273426] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.260 [2024-11-20 09:09:50.273433] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.260 [2024-11-20 09:09:50.285604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.260 [2024-11-20 09:09:50.285931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.260 [2024-11-20 09:09:50.285953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.260 [2024-11-20 09:09:50.285961] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.260 [2024-11-20 09:09:50.286152] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.260 [2024-11-20 09:09:50.286329] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.260 [2024-11-20 09:09:50.286337] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.260 [2024-11-20 09:09:50.286343] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.260 [2024-11-20 09:09:50.286350] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.298652] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.299045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.299061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.299069] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.299245] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.299422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.299430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.299437] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.299443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.311755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.312171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.312189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.312196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.312372] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.312549] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.312557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.312564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.312570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.324756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.325172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.325216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.325239] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.325814] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.326366] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.326374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.326381] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.326388] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.337791] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.338224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.338264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.338298] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.338874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.339135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.339145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.339152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.339159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.350760] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.351161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.351179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.351186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.351358] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.351531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.351540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.351546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.351552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.363639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.364048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.364065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.364073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.364244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.364414] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.521 [2024-11-20 09:09:50.364423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.521 [2024-11-20 09:09:50.364429] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.521 [2024-11-20 09:09:50.364435] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.521 [2024-11-20 09:09:50.376470] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.521 [2024-11-20 09:09:50.376840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.521 [2024-11-20 09:09:50.376855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.521 [2024-11-20 09:09:50.376862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.521 [2024-11-20 09:09:50.377049] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.521 [2024-11-20 09:09:50.377223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.522 [2024-11-20 09:09:50.377231] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.522 [2024-11-20 09:09:50.377237] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.522 [2024-11-20 09:09:50.377244] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.522 [2024-11-20 09:09:50.389399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.522 [2024-11-20 09:09:50.389815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.522 [2024-11-20 09:09:50.389831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.522 [2024-11-20 09:09:50.389838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.522 [2024-11-20 09:09:50.390014] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.522 [2024-11-20 09:09:50.390185] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.522 [2024-11-20 09:09:50.390193] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.522 [2024-11-20 09:09:50.390200] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.522 [2024-11-20 09:09:50.390206] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.522 [2024-11-20 09:09:50.402435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.522 [2024-11-20 09:09:50.402863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.522 [2024-11-20 09:09:50.402907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.522 [2024-11-20 09:09:50.402930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.522 [2024-11-20 09:09:50.403446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.522 [2024-11-20 09:09:50.403618] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.522 [2024-11-20 09:09:50.403626] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.522 [2024-11-20 09:09:50.403632] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.522 [2024-11-20 09:09:50.403639] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.522 [2024-11-20 09:09:50.415257] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.415686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.415703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.415710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.415881] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.416059] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.416069] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.416080] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.416087] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.428304] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.428731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.428748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.428755] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.428932] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.429124] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.429133] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.429139] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.429145] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.441254] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.441683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.441727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.441750] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.442344] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.442515] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.442524] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.442530] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.442536] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.454193] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.454575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.454619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.454642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.455234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.455679] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.455687] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.455694] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.455700] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.469290] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.469815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.469867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.469890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.470419] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.470672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.470684] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.470693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.470704] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.482258] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.482687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.482730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.482753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.483266] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.483438] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.483446] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.483452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.483458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.495102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.495554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.495596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.495618] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.496207] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.496743] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.522 [2024-11-20 09:09:50.496751] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.522 [2024-11-20 09:09:50.496757] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.522 [2024-11-20 09:09:50.496763] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.522 [2024-11-20 09:09:50.507935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.522 [2024-11-20 09:09:50.508355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.522 [2024-11-20 09:09:50.508371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.522 [2024-11-20 09:09:50.508382] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.522 [2024-11-20 09:09:50.508553] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.522 [2024-11-20 09:09:50.508724] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.523 [2024-11-20 09:09:50.508733] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.523 [2024-11-20 09:09:50.508739] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.523 [2024-11-20 09:09:50.508745] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.523 [2024-11-20 09:09:50.520843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.523 [2024-11-20 09:09:50.521184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.523 [2024-11-20 09:09:50.521201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.523 [2024-11-20 09:09:50.521208] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.523 [2024-11-20 09:09:50.521379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.523 [2024-11-20 09:09:50.521551] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.523 [2024-11-20 09:09:50.521559] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.523 [2024-11-20 09:09:50.521565] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.523 [2024-11-20 09:09:50.521572] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.523 [2024-11-20 09:09:50.533756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.523 [2024-11-20 09:09:50.534211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.523 [2024-11-20 09:09:50.534256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.523 [2024-11-20 09:09:50.534278] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.523 [2024-11-20 09:09:50.534709] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.523 [2024-11-20 09:09:50.534880] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.523 [2024-11-20 09:09:50.534889] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.523 [2024-11-20 09:09:50.534896] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.523 [2024-11-20 09:09:50.534902] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.523 [2024-11-20 09:09:50.546623] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.523 [2024-11-20 09:09:50.547064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.523 [2024-11-20 09:09:50.547082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.523 [2024-11-20 09:09:50.547090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.523 [2024-11-20 09:09:50.547261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.523 [2024-11-20 09:09:50.547437] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.523 [2024-11-20 09:09:50.547445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.523 [2024-11-20 09:09:50.547452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.523 [2024-11-20 09:09:50.547458] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.784 [2024-11-20 09:09:50.559765] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.784 [2024-11-20 09:09:50.560216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.784 [2024-11-20 09:09:50.560260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.784 [2024-11-20 09:09:50.560284] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.784 [2024-11-20 09:09:50.560860] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.784 [2024-11-20 09:09:50.561097] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.784 [2024-11-20 09:09:50.561105] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.784 [2024-11-20 09:09:50.561112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.784 [2024-11-20 09:09:50.561118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
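The repeated `connect() failed, errno = 111` entries above are ECONNREFUSED: the initiator keeps dialing 10.0.0.2:4420 while nothing is listening there. A minimal sketch of how that errno surfaces (loopback and a free port stand in for the testbed address; not the SPDK socket layer):

```python
import errno
import socket

def dial(host: str, port: int) -> int:
    """Attempt a TCP connect; return 0 on success or the errno on failure."""
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        s.connect((host, port))
        return 0
    except OSError as e:
        return e.errno
    finally:
        s.close()

# Grab a free port, then close the listener so nothing accepts on it,
# mimicking a target that has gone away.
probe = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
probe.bind(("127.0.0.1", 0))
_, free_port = probe.getsockname()
probe.close()

rc = dial("127.0.0.1", free_port)
# On Linux, errno.ECONNREFUSED is 111, matching the log.
print(rc == errno.ECONNREFUSED)
```

On loopback the kernel answers the SYN with RST immediately, so the connect fails fast, which is why the log shows a fresh reset cycle roughly every 13 ms.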
00:25:34.784 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2472736 Killed "${NVMF_APP[@]}" "$@"
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:34.784 [2024-11-20 09:09:50.572817] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.784 [2024-11-20 09:09:50.573249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.784 [2024-11-20 09:09:50.573265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.784 [2024-11-20 09:09:50.573272] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.784 [2024-11-20 09:09:50.573448] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.784 [2024-11-20 09:09:50.573624] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.784 [2024-11-20 09:09:50.573632] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.784 [2024-11-20 09:09:50.573638] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.784 [2024-11-20 09:09:50.573645] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=2473928
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 2473928
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 2473928 ']'
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:25:34.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:25:34.784 09:09:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:25:34.784 [2024-11-20 09:09:50.586204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.784 [2024-11-20 09:09:50.586583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.784 [2024-11-20 09:09:50.586601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.784 [2024-11-20 09:09:50.586609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.784 [2024-11-20 09:09:50.586785] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.784 [2024-11-20 09:09:50.586968] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.586981] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.586988] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.586994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
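The `waitforlisten 2473928` call above blocks (with `max_retries=100`) until the restarted nvmf_tgt is accepting RPCs on the UNIX domain socket /var/tmp/spdk.sock. A rough stand-in for that polling loop (a hypothetical helper, not the autotest shell implementation):

```python
import os
import socket
import time

def wait_for_unix_socket(path: str, max_retries: int = 100, delay: float = 0.1) -> bool:
    """Poll until a process is accepting connections on a UNIX domain socket."""
    for _ in range(max_retries):
        if os.path.exists(path):
            s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
            try:
                s.connect(path)
                return True  # daemon is up and listening
            except OSError:
                pass  # socket file exists but nothing is accepting yet
            finally:
                s.close()
        time.sleep(delay)
    return False
```

The bound on retries matters here: if the target never comes up, the caller fails the test with a clear timeout instead of hanging the CI job.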
00:25:34.785 [2024-11-20 09:09:50.599330] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.599767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.599785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.599792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.599976] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.600154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.600163] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.600169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.600176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.612378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.612806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.612823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.612830] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.613034] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.613212] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.613221] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.613231] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.613237] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.625441] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.625758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.625775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.625782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.625960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.626132] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.626140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.626147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.626153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.631328] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:25:34.785 [2024-11-20 09:09:50.631377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:25:34.785 [2024-11-20 09:09:50.638344] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.638672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.638690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.638698] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.638869] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.639048] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.639058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.639064] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.639071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.651278] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.651716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.651761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.651784] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.652379] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.652971] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.653012] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.653034] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.653057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.664367] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.664799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.664816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.664824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.665006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.665182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.665191] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.665197] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.665204] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.677511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.677971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.677989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.678002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.678179] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.678356] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.785 [2024-11-20 09:09:50.678364] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.785 [2024-11-20 09:09:50.678371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.785 [2024-11-20 09:09:50.678378] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.785 [2024-11-20 09:09:50.690540] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:25:34.785 [2024-11-20 09:09:50.690933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:34.785 [2024-11-20 09:09:50.690958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420
00:25:34.785 [2024-11-20 09:09:50.690967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set
00:25:34.785 [2024-11-20 09:09:50.691144] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor
00:25:34.785 [2024-11-20 09:09:50.691320] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:25:34.786 [2024-11-20 09:09:50.691331] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:25:34.786 [2024-11-20 09:09:50.691340] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:25:34.786 [2024-11-20 09:09:50.691352] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:25:34.786 [2024-11-20 09:09:50.703705] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.704114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.704132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.704141] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.704317] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.704499] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.704507] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.704514] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.704521] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.713125] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:25:34.786 [2024-11-20 09:09:50.716840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.717187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.717205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.717213] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.717390] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.717567] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.717576] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.717583] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.717591] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.729885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.730254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.730272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.730279] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.730450] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.730625] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.730634] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.730640] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.730647] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.742977] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.743324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.743341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.743348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.743520] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.743693] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.743702] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.743710] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.743716] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 00:25:34.786 [2024-11-20 09:09:50.754212] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:34.786 [2024-11-20 09:09:50.754239] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:34.786 [2024-11-20 09:09:50.754247] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:34.786 [2024-11-20 09:09:50.754253] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:25:34.786 [2024-11-20 09:09:50.754258] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:34.786 [2024-11-20 09:09:50.755692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:25:34.786 [2024-11-20 09:09:50.755802] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:25:34.786 [2024-11-20 09:09:50.755803] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:25:34.786 [2024-11-20 09:09:50.756123] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.756540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.756557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.756564] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.756742] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.756919] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.756927] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.756934] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.756941] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.769279] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.769648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.769668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.769676] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.769854] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.770035] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.770049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.770056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.770063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.782388] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.782715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.786 [2024-11-20 09:09:50.782735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.786 [2024-11-20 09:09:50.782744] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.786 [2024-11-20 09:09:50.782921] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.786 [2024-11-20 09:09:50.783104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.786 [2024-11-20 09:09:50.783113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.786 [2024-11-20 09:09:50.783120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.786 [2024-11-20 09:09:50.783127] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.786 [2024-11-20 09:09:50.795444] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.786 [2024-11-20 09:09:50.795901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.787 [2024-11-20 09:09:50.795920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.787 [2024-11-20 09:09:50.795929] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.787 [2024-11-20 09:09:50.796113] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.787 [2024-11-20 09:09:50.796291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.787 [2024-11-20 09:09:50.796300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.787 [2024-11-20 09:09:50.796307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.787 [2024-11-20 09:09:50.796314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:34.787 [2024-11-20 09:09:50.808641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:34.787 [2024-11-20 09:09:50.809100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:34.787 [2024-11-20 09:09:50.809122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:34.787 [2024-11-20 09:09:50.809131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:34.787 [2024-11-20 09:09:50.809309] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:34.787 [2024-11-20 09:09:50.809486] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:34.787 [2024-11-20 09:09:50.809494] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:34.787 [2024-11-20 09:09:50.809502] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:34.787 [2024-11-20 09:09:50.809515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.047 [2024-11-20 09:09:50.821689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.047 [2024-11-20 09:09:50.822102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.047 [2024-11-20 09:09:50.822120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.047 [2024-11-20 09:09:50.822129] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.047 [2024-11-20 09:09:50.822306] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.047 [2024-11-20 09:09:50.822483] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.047 [2024-11-20 09:09:50.822492] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.047 [2024-11-20 09:09:50.822499] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.047 [2024-11-20 09:09:50.822507] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.047 [2024-11-20 09:09:50.834816] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.047 [2024-11-20 09:09:50.835161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.047 [2024-11-20 09:09:50.835179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.047 [2024-11-20 09:09:50.835186] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.047 [2024-11-20 09:09:50.835363] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.047 [2024-11-20 09:09:50.835539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.835547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.835553] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.835560] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.847868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.848284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.848300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.848308] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.848484] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.848661] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.848669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.848676] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.848683] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.860999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.861362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.861379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.861387] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.861563] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.861740] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.861748] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.861754] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.861761] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.874075] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.874457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.874474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.874481] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.874657] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.874837] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.874846] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.874852] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.874858] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.887178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.887654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.887672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.887679] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.887855] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.888036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.888045] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.888052] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.888059] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.900204] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.900574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.900590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.900598] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.900778] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.900959] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.900968] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.900975] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.900982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.913309] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.913745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.913762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.913770] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.913952] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.914128] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.914136] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.914143] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.914149] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.926452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.926875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.926891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.926899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.927080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.927259] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.927267] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.927274] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.927280] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.939598] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.939983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.940000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.940008] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.940185] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.940362] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.940374] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.940382] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.940390] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.952704] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.953168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.953186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.048 [2024-11-20 09:09:50.953194] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.048 [2024-11-20 09:09:50.953370] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.048 [2024-11-20 09:09:50.953547] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.048 [2024-11-20 09:09:50.953556] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.048 [2024-11-20 09:09:50.953563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.048 [2024-11-20 09:09:50.953570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.048 [2024-11-20 09:09:50.965754] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.048 [2024-11-20 09:09:50.966149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.048 [2024-11-20 09:09:50.966166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:50.966173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:50.966350] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:50.966527] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:50.966536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:50.966542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:50.966549] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:50.978863] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:50.979161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:50.979178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:50.979185] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:50.979361] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:50.979539] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:50.979547] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:50.979554] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:50.979564] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:50.992045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:50.992453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:50.992469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:50.992477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:50.992653] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:50.992828] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:50.992836] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:50.992843] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:50.992849] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.005163] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.005532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.005548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.005556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.005732] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.005908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.005916] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.005923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.005929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.018272] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.018670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.018687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.018694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.018873] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.019056] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.019065] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.019072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.019078] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 4731.17 IOPS, 18.48 MiB/s [2024-11-20T08:09:51.090Z] [2024-11-20 09:09:51.031354] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.032082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.032100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.032107] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.032284] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.032461] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.032469] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.032476] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.032482] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.044464] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.044874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.044891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.044899] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.045080] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.045258] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.045266] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.045273] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.045279] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.057597] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.058031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.058048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.058056] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.058232] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.058408] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.058416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.058422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.058428] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.070743] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.071148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.071165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.071173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.071353] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.071530] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.071539] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.071546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.071552] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.049 [2024-11-20 09:09:51.083864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.049 [2024-11-20 09:09:51.084153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.049 [2024-11-20 09:09:51.084171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.049 [2024-11-20 09:09:51.084179] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.049 [2024-11-20 09:09:51.084355] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.049 [2024-11-20 09:09:51.084532] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.049 [2024-11-20 09:09:51.084540] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.049 [2024-11-20 09:09:51.084546] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.049 [2024-11-20 09:09:51.084553] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.310 [2024-11-20 09:09:51.097024] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.310 [2024-11-20 09:09:51.097361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.310 [2024-11-20 09:09:51.097378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.310 [2024-11-20 09:09:51.097385] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.310 [2024-11-20 09:09:51.097561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.310 [2024-11-20 09:09:51.097738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.310 [2024-11-20 09:09:51.097746] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.310 [2024-11-20 09:09:51.097753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.310 [2024-11-20 09:09:51.097759] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.310 [2024-11-20 09:09:51.110062] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.310 [2024-11-20 09:09:51.110470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.310 [2024-11-20 09:09:51.110486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.310 [2024-11-20 09:09:51.110494] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.310 [2024-11-20 09:09:51.110670] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.310 [2024-11-20 09:09:51.110847] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.310 [2024-11-20 09:09:51.110858] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.310 [2024-11-20 09:09:51.110864] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.310 [2024-11-20 09:09:51.110871] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.310 [2024-11-20 09:09:51.123185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.310 [2024-11-20 09:09:51.123591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.310 [2024-11-20 09:09:51.123607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.310 [2024-11-20 09:09:51.123615] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.310 [2024-11-20 09:09:51.123791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.310 [2024-11-20 09:09:51.123972] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.310 [2024-11-20 09:09:51.123980] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.310 [2024-11-20 09:09:51.123987] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.310 [2024-11-20 09:09:51.123994] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.310 [2024-11-20 09:09:51.136286] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.310 [2024-11-20 09:09:51.136678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.310 [2024-11-20 09:09:51.136694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.310 [2024-11-20 09:09:51.136701] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.310 [2024-11-20 09:09:51.136876] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.310 [2024-11-20 09:09:51.137058] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.310 [2024-11-20 09:09:51.137067] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.310 [2024-11-20 09:09:51.137074] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.310 [2024-11-20 09:09:51.137080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.310 [2024-11-20 09:09:51.149372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.310 [2024-11-20 09:09:51.149710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.310 [2024-11-20 09:09:51.149727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.310 [2024-11-20 09:09:51.149734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.310 [2024-11-20 09:09:51.149911] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.310 [2024-11-20 09:09:51.150092] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.150101] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.150108] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.150118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.162409] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.162816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.162833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.162841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.163021] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.163198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.163206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.163213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.163219] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.175511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.175898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.175915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.175922] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.176102] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.176278] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.176287] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.176293] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.176299] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.188604] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.189011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.189030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.189037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.189213] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.189391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.189399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.189406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.189412] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.201710] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.202063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.202080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.202087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.202263] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.202440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.202448] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.202455] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.202461] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.214780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.215212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.215229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.215237] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.215413] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.215589] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.215597] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.215604] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.215611] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.227906] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.228266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.228283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.228290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.228467] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.228644] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.228652] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.228659] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.228665] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.240976] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.241274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.241292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.241299] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.241479] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.241656] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.241665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.241672] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.241678] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.254015] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.254427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.254444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.254451] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.254627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.254803] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.254811] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.254818] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.254825] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.267134] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.267468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.267485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.267492] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.267668] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.267845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.311 [2024-11-20 09:09:51.267854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.311 [2024-11-20 09:09:51.267860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.311 [2024-11-20 09:09:51.267867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.311 [2024-11-20 09:09:51.280178] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.311 [2024-11-20 09:09:51.280512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.311 [2024-11-20 09:09:51.280528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.311 [2024-11-20 09:09:51.280536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.311 [2024-11-20 09:09:51.280712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.311 [2024-11-20 09:09:51.280890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.280905] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.280912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.280919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.312 [2024-11-20 09:09:51.293228] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.312 [2024-11-20 09:09:51.293642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.312 [2024-11-20 09:09:51.293659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.312 [2024-11-20 09:09:51.293667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.312 [2024-11-20 09:09:51.293842] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.312 [2024-11-20 09:09:51.294024] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.294044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.294055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.294064] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.312 [2024-11-20 09:09:51.306372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.312 [2024-11-20 09:09:51.306804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.312 [2024-11-20 09:09:51.306821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.312 [2024-11-20 09:09:51.306829] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.312 [2024-11-20 09:09:51.307010] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.312 [2024-11-20 09:09:51.307187] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.307196] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.307203] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.307209] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.312 [2024-11-20 09:09:51.319535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.312 [2024-11-20 09:09:51.319977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.312 [2024-11-20 09:09:51.319995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.312 [2024-11-20 09:09:51.320002] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.312 [2024-11-20 09:09:51.320178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.312 [2024-11-20 09:09:51.320353] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.320362] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.320368] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.320379] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.312 [2024-11-20 09:09:51.332679] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.312 [2024-11-20 09:09:51.333010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.312 [2024-11-20 09:09:51.333028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.312 [2024-11-20 09:09:51.333035] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.312 [2024-11-20 09:09:51.333212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.312 [2024-11-20 09:09:51.333388] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.333396] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.333403] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.333409] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.312 [2024-11-20 09:09:51.345711] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.312 [2024-11-20 09:09:51.346142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.312 [2024-11-20 09:09:51.346159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.312 [2024-11-20 09:09:51.346166] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.312 [2024-11-20 09:09:51.346343] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.312 [2024-11-20 09:09:51.346519] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.312 [2024-11-20 09:09:51.346528] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.312 [2024-11-20 09:09:51.346534] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.312 [2024-11-20 09:09:51.346540] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.358836] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.359191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.359221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.359228] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.359404] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.359579] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.359588] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.572 [2024-11-20 09:09:51.359594] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.572 [2024-11-20 09:09:51.359600] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.371898] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.372336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.372353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.372360] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.372535] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.372712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.372721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.572 [2024-11-20 09:09:51.372728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.572 [2024-11-20 09:09:51.372735] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.385040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.385467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.385485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.385493] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.385669] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.385846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.385856] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.572 [2024-11-20 09:09:51.385863] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.572 [2024-11-20 09:09:51.385869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.398175] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.398606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.398622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.398630] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.398806] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.398988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.398997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.572 [2024-11-20 09:09:51.399004] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.572 [2024-11-20 09:09:51.399010] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.411318] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.411751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.411768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.411776] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.411960] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.412138] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.412147] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.572 [2024-11-20 09:09:51.412154] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.572 [2024-11-20 09:09:51.412160] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.572 [2024-11-20 09:09:51.424468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.572 [2024-11-20 09:09:51.424905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.572 [2024-11-20 09:09:51.424922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.572 [2024-11-20 09:09:51.424930] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.572 [2024-11-20 09:09:51.425112] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.572 [2024-11-20 09:09:51.425291] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.572 [2024-11-20 09:09:51.425300] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.425306] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.425313] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 [2024-11-20 09:09:51.437615] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.438024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.438041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.438049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.438225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.438402] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.438410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.438417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.438423] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 [2024-11-20 09:09:51.450720] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.451154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.451171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.451178] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.451354] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.451531] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.451542] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.451549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.451555] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 [2024-11-20 09:09:51.463846] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.464225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.464242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.464250] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.464427] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.464605] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.464613] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.464619] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.464626] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.573 [2024-11-20 09:09:51.476931] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.477372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.477389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.477397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.477574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.477750] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.477759] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.477769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.477777] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 [2024-11-20 09:09:51.490102] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.490383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.490400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.490407] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.490583] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.490765] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.490773] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.490780] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.490786] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 [2024-11-20 09:09:51.503267] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.503631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.503648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.503655] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.503831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.504011] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.504021] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.504028] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.504035] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.573 [2024-11-20 09:09:51.513844] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:35.573 [2024-11-20 09:09:51.516365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.516708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.516724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.516732] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.516908] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.517087] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.573 [2024-11-20 09:09:51.517096] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.573 [2024-11-20 09:09:51.517102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.573 [2024-11-20 09:09:51.517109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.573 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.573 [2024-11-20 09:09:51.529405] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.573 [2024-11-20 09:09:51.529832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.573 [2024-11-20 09:09:51.529849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.573 [2024-11-20 09:09:51.529857] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.573 [2024-11-20 09:09:51.530037] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.573 [2024-11-20 09:09:51.530215] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.574 [2024-11-20 09:09:51.530223] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.574 [2024-11-20 09:09:51.530230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.574 [2024-11-20 09:09:51.530236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.574 [2024-11-20 09:09:51.542533] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.574 [2024-11-20 09:09:51.542966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-11-20 09:09:51.542983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.574 [2024-11-20 09:09:51.542991] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.574 [2024-11-20 09:09:51.543167] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.574 [2024-11-20 09:09:51.543344] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.574 [2024-11-20 09:09:51.543352] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.574 [2024-11-20 09:09:51.543358] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.574 [2024-11-20 09:09:51.543364] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.574 [2024-11-20 09:09:51.555675] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.574 [2024-11-20 09:09:51.556110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-11-20 09:09:51.556127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.574 [2024-11-20 09:09:51.556134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.574 [2024-11-20 09:09:51.556311] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.574 [2024-11-20 09:09:51.556487] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.574 [2024-11-20 09:09:51.556496] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.574 [2024-11-20 09:09:51.556503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.574 [2024-11-20 09:09:51.556509] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.574 Malloc0 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 [2024-11-20 09:09:51.568803] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.574 [2024-11-20 09:09:51.569235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:35.574 [2024-11-20 09:09:51.569252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b7e500 with addr=10.0.0.2, port=4420 00:25:35.574 [2024-11-20 09:09:51.569260] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b7e500 is same with the state(6) to be set 00:25:35.574 [2024-11-20 09:09:51.569436] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b7e500 (9): Bad file descriptor 00:25:35.574 [2024-11-20 09:09:51.569613] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:25:35.574 [2024-11-20 09:09:51.569621] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:25:35.574 [2024-11-20 09:09:51.569629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:25:35.574 [2024-11-20 09:09:51.569635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:35.574 [2024-11-20 09:09:51.581607] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:35.574 [2024-11-20 09:09:51.581935] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.574 09:09:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2473000 00:25:35.833 [2024-11-20 09:09:51.738891] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:25:37.028 4496.57 IOPS, 17.56 MiB/s [2024-11-20T08:09:54.446Z] 5312.00 IOPS, 20.75 MiB/s [2024-11-20T08:09:55.383Z] 5956.89 IOPS, 23.27 MiB/s [2024-11-20T08:09:56.320Z] 6464.10 IOPS, 25.25 MiB/s [2024-11-20T08:09:57.257Z] 6880.64 IOPS, 26.88 MiB/s [2024-11-20T08:09:58.194Z] 7241.08 IOPS, 28.29 MiB/s [2024-11-20T08:09:59.132Z] 7538.15 IOPS, 29.45 MiB/s [2024-11-20T08:10:00.067Z] 7791.86 IOPS, 30.44 MiB/s 00:25:44.026 Latency(us) 00:25:44.026 [2024-11-20T08:10:00.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:25:44.026 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:25:44.026 Verification LBA range: start 0x0 length 0x4000 00:25:44.026 Nvme1n1 : 15.00 8012.96 31.30 13189.30 0.00 6017.52 425.63 18692.01 00:25:44.026 [2024-11-20T08:10:00.067Z] =================================================================================================================== 00:25:44.026 [2024-11-20T08:10:00.067Z] Total : 8012.96 31.30 13189.30 0.00 6017.52 425.63 18692.01 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:25:44.285 09:10:00 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:44.285 rmmod nvme_tcp 00:25:44.285 rmmod nvme_fabrics 00:25:44.285 rmmod nvme_keyring 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 2473928 ']' 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 2473928 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 2473928 ']' 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 2473928 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:44.285 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2473928 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2473928' 00:25:44.545 killing process with pid 2473928 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
common/autotest_common.sh@973 -- # kill 2473928 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 2473928 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@264 -- # local dev 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:44.545 09:10:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # return 0 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@284 -- # iptr 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-save 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-restore 00:25:47.082 00:25:47.082 real 0m26.199s 00:25:47.082 user 1m1.178s 00:25:47.082 sys 0m6.678s 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:25:47.082 ************************************ 00:25:47.082 END TEST nvmf_bdevperf 00:25:47.082 ************************************ 
00:25:47.082 09:10:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.082 ************************************ 00:25:47.082 START TEST nvmf_target_disconnect 00:25:47.082 ************************************ 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:25:47.082 * Looking for test storage... 00:25:47.082 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 
00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:25:47.082 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.083 --rc genhtml_branch_coverage=1 00:25:47.083 --rc genhtml_function_coverage=1 00:25:47.083 --rc genhtml_legend=1 00:25:47.083 --rc geninfo_all_blocks=1 00:25:47.083 --rc geninfo_unexecuted_blocks=1 
00:25:47.083 00:25:47.083 ' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:25:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.083 --rc genhtml_branch_coverage=1 00:25:47.083 --rc genhtml_function_coverage=1 00:25:47.083 --rc genhtml_legend=1 00:25:47.083 --rc geninfo_all_blocks=1 00:25:47.083 --rc geninfo_unexecuted_blocks=1 00:25:47.083 00:25:47.083 ' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.083 --rc genhtml_branch_coverage=1 00:25:47.083 --rc genhtml_function_coverage=1 00:25:47.083 --rc genhtml_legend=1 00:25:47.083 --rc geninfo_all_blocks=1 00:25:47.083 --rc geninfo_unexecuted_blocks=1 00:25:47.083 00:25:47.083 ' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:47.083 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:47.083 --rc genhtml_branch_coverage=1 00:25:47.083 --rc genhtml_function_coverage=1 00:25:47.083 --rc genhtml_legend=1 00:25:47.083 --rc geninfo_all_blocks=1 00:25:47.083 --rc geninfo_unexecuted_blocks=1 00:25:47.083 00:25:47.083 ' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:47.083 09:10:02 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:47.083 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:25:47.083 09:10:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:25:53.784 09:10:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:53.784 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:53.784 09:10:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:53.784 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.0: cvl_0_0' 00:25:53.784 Found net devices under 0000:86:00.0: cvl_0_0 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:53.784 Found net devices under 0000:86:00.1: cvl_0_1 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.784 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
-g _dev 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:53.785 
09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:53.785 10.0.0.1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # set_ip 
cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:53.785 10.0.0.2 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:53.785 09:10:08 
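The `val_to_ip` calls above turn the pooled integers 167772161 (0x0A000001) and 167772162 into 10.0.0.1 and 10.0.0.2 by printing their four bytes dot-separated. A standalone sketch of that conversion, reconstructed from the printed output (the bit-shifting shown here is inferred; setup.sh's internals may differ):

```shell
# Reconstructs val_to_ip from its observable behavior: split a 32-bit
# value into four bytes and print them as a dotted-quad IPv4 address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Consecutive pool values land on consecutive host addresses, which is why each initiator/target pair gets adjacent IPs (`ips=("$ip" $((++ip)))` in the log).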
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:53.785 09:10:08 
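The commands traced above amount to a fixed per-pair recipe: move the target NIC into the namespace, address both ends, bring them up, and open TCP/4420 through the firewall. A dry-run sketch that only prints the commands, so it needs no root and no real cvl_* NICs (the function name and argument order are illustrative, not from setup.sh):

```shell
# Emit (rather than execute) the interface-pair setup steps seen in the log.
setup_pair_cmds() {
  local ns=$1 initiator=$2 target=$3 iip=$4 tip=$5
  echo "ip netns add $ns"
  echo "ip link set $target netns $ns"
  echo "ip addr add $iip/24 dev $initiator"
  echo "ip netns exec $ns ip addr add $tip/24 dev $target"
  echo "ip link set $initiator up"
  echo "ip netns exec $ns ip link set $target up"
  echo "iptables -I INPUT 1 -i $initiator -p tcp --dport 4420 -j ACCEPT"
}

setup_pair_cmds nvmf_ns_spdk cvl_0_0 cvl_0_1 10.0.0.1 10.0.0.2
```

Keeping the target interface in its own namespace is what lets a single host exercise a real TCP path between initiator and target without loopback shortcuts.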
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:53.785 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:53.786 
09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:53.786 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:53.786 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:25:53.786 00:25:53.786 --- 10.0.0.1 ping statistics --- 00:25:53.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.786 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:53.786 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:53.786 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.069 ms 00:25:53.786 00:25:53.786 --- 10.0.0.2 ping statistics --- 00:25:53.786 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:53.786 rtt min/avg/max/mdev = 0.069/0.069/0.069/0.000 ms 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:53.786 
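Notice that the address lookups above never parse `ip addr` output: the harness earlier `tee`'d each IP into the interface's ifalias, and `get_ip_address` simply reads it back from sysfs. A sketch of that round-trip, with a temp file standing in for `/sys/class/net/<dev>/ifalias` since the real path needs a live NIC (the helper names here are illustrative):

```shell
# Store and retrieve an interface IP via an ifalias-style file.
alias_file=$(mktemp)

set_ip_alias() { echo "$2" > "$1"; }   # mirrors: echo $ip | tee .../ifalias
get_ip_alias() { cat "$1"; }           # mirrors: cat .../ifalias

set_ip_alias "$alias_file" 10.0.0.2
get_ip_alias "$alias_file"             # prints 10.0.0.2
```

Using ifalias as a tiny key-value store makes the lookup uniform whether it runs in the root namespace or via `ip netns exec`, as the cvl_0_1 case shows.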
09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:53.786 09:10:08 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:53.786 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:53.787 
09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test 
nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 ************************************ 00:25:53.787 START TEST nvmf_target_disconnect_tc1 00:25:53.787 ************************************ 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:25:53.787 09:10:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:53.787 [2024-11-20 09:10:09.079711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:53.787 [2024-11-20 09:10:09.079763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x2375ab0 with addr=10.0.0.2, port=4420 00:25:53.787 [2024-11-20 09:10:09.079784] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:25:53.787 [2024-11-20 09:10:09.079793] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:25:53.787 [2024-11-20 09:10:09.079800] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:25:53.787 spdk_nvme_probe() failed for transport 
address '10.0.0.2' 00:25:53.787 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:25:53.787 Initializing NVMe Controllers 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:53.787 00:25:53.787 real 0m0.118s 00:25:53.787 user 0m0.053s 00:25:53.787 sys 0m0.065s 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 ************************************ 00:25:53.787 END TEST nvmf_target_disconnect_tc1 00:25:53.787 ************************************ 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 ************************************ 00:25:53.787 START TEST nvmf_target_disconnect_tc2 00:25:53.787 ************************************ 00:25:53.787 09:10:09 
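tc1 above passes precisely because the reconnect example fails: no subsystem is listening on 10.0.0.2:4420 yet, so connect() returns errno 111 and the `NOT` wrapper treats the nonzero exit as success (hence `es=1` followed by a clean END TEST). A minimal sketch of that inverted check (autotest_common.sh's real NOT also inspects the exit-code range, e.g. `(( es > 128 ))` for signals, which this sketch omits):

```shell
# Invert a command's exit status: succeed only if the wrapped command fails.
NOT() {
  if "$@"; then
    return 1    # unexpected success
  fi
  return 0      # failure was expected
}

NOT false && echo "expected failure observed"
```

This idiom lets negative test cases reuse the same `run_test` plumbing as positive ones.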
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2479127 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2479127 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2479127 ']' 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:53.787 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:53.787 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:53.788 09:10:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:53.788 [2024-11-20 09:10:09.218829] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:25:53.788 [2024-11-20 09:10:09.218867] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:53.788 [2024-11-20 09:10:09.299633] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:53.788 [2024-11-20 09:10:09.339370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:53.788 [2024-11-20 09:10:09.339412] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:53.788 [2024-11-20 09:10:09.339419] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:53.788 [2024-11-20 09:10:09.339424] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:53.788 [2024-11-20 09:10:09.339429] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:53.788 [2024-11-20 09:10:09.341091] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:25:53.788 [2024-11-20 09:10:09.341212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:25:53.788 [2024-11-20 09:10:09.341301] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:25:53.788 [2024-11-20 09:10:09.341302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:25:54.048 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:54.048 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:25:54.048 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:54.048 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:54.048 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 Malloc0 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 [2024-11-20 09:10:10.138131] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 [2024-11-20 09:10:10.170403] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2479376 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:25:54.310 09:10:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:25:56.219 09:10:12 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2479127 00:25:56.220 09:10:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 
Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 [2024-11-20 09:10:12.205624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O 
failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 
00:25:56.220 [2024-11-20 09:10:12.205836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 
starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 [2024-11-20 09:10:12.206038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, 
sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Read completed with error (sct=0, sc=8) 00:25:56.220 starting I/O failed 00:25:56.220 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Write completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 Read completed with error (sct=0, sc=8) 00:25:56.221 starting I/O failed 00:25:56.221 [2024-11-20 09:10:12.206244] nvme_qpair.c: 
812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:25:56.221 [2024-11-20 09:10:12.206374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.206396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.206633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.206643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.206828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.206848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.207093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.207121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.207331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.207364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.221 [2024-11-20 09:10:12.207565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.207599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.207789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.207801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.208031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.208043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.208142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.208178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.208349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.208375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.221 [2024-11-20 09:10:12.208494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.208520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.208772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.208805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.209098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.209132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.209401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.209434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.209678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.209711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.221 [2024-11-20 09:10:12.210004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.210046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.210310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.210343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.210558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.210591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.210857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.210891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.211078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.211112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.221 [2024-11-20 09:10:12.211365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.211399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.211614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.211639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.211812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.211838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.212077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.212104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.212281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.212307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.221 [2024-11-20 09:10:12.212515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.212548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.212672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.212705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.212938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.212980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.213222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.213256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 00:25:56.221 [2024-11-20 09:10:12.213504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.221 [2024-11-20 09:10:12.213538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.221 qpair failed and we were unable to recover it. 
00:25:56.222 [2024-11-20 09:10:12.213715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-11-20 09:10:12.213747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.222 qpair failed and we were unable to recover it. 00:25:56.222 [2024-11-20 09:10:12.213940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-11-20 09:10:12.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.222 qpair failed and we were unable to recover it. 00:25:56.222 [2024-11-20 09:10:12.214174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-11-20 09:10:12.214200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.222 qpair failed and we were unable to recover it. 00:25:56.222 [2024-11-20 09:10:12.214304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-11-20 09:10:12.214329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.222 qpair failed and we were unable to recover it. 00:25:56.222 [2024-11-20 09:10:12.214583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.222 [2024-11-20 09:10:12.214609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.222 qpair failed and we were unable to recover it. 
00:25:56.222 [2024-11-20 09:10:12.214763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.222 [2024-11-20 09:10:12.214789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.222 qpair failed and we were unable to recover it.
00:25:56.225 [... the same posix_sock_create / nvme_tcp_qpair_connect_sock failure (connect() errno = 111, connection refused, addr=10.0.0.2, port=4420) repeated continuously from 09:10:12.214 through 09:10:12.238 for tqpairs 0x7f1c20000b90, 0x7f1c28000b90, 0x7f1c1c000b90 and 0x1b8bba0; every attempt ended with "qpair failed and we were unable to recover it." ...]
00:25:56.225 [2024-11-20 09:10:12.238170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.238267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.238378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.238510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.238612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 
00:25:56.225 [2024-11-20 09:10:12.238719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.238883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.238905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 
00:25:56.225 [2024-11-20 09:10:12.239478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.239829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.239984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.240200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 
00:25:56.225 [2024-11-20 09:10:12.240362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.240516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.240640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.240755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.240946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.240977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 
00:25:56.225 [2024-11-20 09:10:12.241062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.241081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.241317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.241338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.241448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.241469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.241636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.225 [2024-11-20 09:10:12.241656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.225 qpair failed and we were unable to recover it. 00:25:56.225 [2024-11-20 09:10:12.241736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.241756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.241834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.241853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.241955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.241977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.242442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.242849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.242995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.243017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.243231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.243251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.243382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.243403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.243563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.243583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.243857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.243878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.244082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.244104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.244259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.244279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.244423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.244463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.244720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.244751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.244875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.244907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.245021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.245054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.245223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.245255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.245491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.245522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.245702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.245723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.245821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.245841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.245988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.246101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.246250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.246432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.246621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.246854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.246972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.246993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.247155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.247176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.247325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.247345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.247509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.247529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.226 [2024-11-20 09:10:12.247740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.247760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 
00:25:56.226 [2024-11-20 09:10:12.247868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.226 [2024-11-20 09:10:12.247888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.226 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.248105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.248218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.248429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.248621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 
00:25:56.227 [2024-11-20 09:10:12.248762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.248975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.248995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.249255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.249453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.249588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 
00:25:56.227 [2024-11-20 09:10:12.249694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.249812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.249925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.249945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.250036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.250150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 
00:25:56.227 [2024-11-20 09:10:12.250327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.250491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.250730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.250910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.250932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 00:25:56.227 [2024-11-20 09:10:12.251029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.227 [2024-11-20 09:10:12.251052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.227 qpair failed and we were unable to recover it. 
00:25:56.513 [2024-11-20 09:10:12.274707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.274742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.275000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.275143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.275263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.275447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 
00:25:56.513 [2024-11-20 09:10:12.275645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.275842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.275862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.276088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.276109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.276273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.276293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.276502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.276522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 
00:25:56.513 [2024-11-20 09:10:12.276762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.276795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.276991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.277025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.277283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.277316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.277450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.277482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.277730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.277762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 
00:25:56.513 [2024-11-20 09:10:12.277905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.277966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.278094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.278127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.278388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.513 [2024-11-20 09:10:12.278420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.513 qpair failed and we were unable to recover it. 00:25:56.513 [2024-11-20 09:10:12.278683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.278715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.278897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.278917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.279172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.279205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.279351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.279383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.279576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.279608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.279788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.279807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.279980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.280021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.280230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.280262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.280394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.280427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.280691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.280715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.280895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.280918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.281105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.281127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.281302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.281323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.281426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.281448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.281613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.281635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.281820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.281854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.282094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.282128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.282309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.282342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.282600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.282633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.282913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.282945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.283167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.283198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.283467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.283500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.283693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.283724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.283907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.283940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.284212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.284244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.284426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.284459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.284697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.284729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.284917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.284958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.285084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.285118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.285357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.285390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.285626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.285659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.285848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.285889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.286052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.286074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.286223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.286243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.286338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.286359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.286578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.286610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.514 [2024-11-20 09:10:12.286899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.286984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 
00:25:56.514 [2024-11-20 09:10:12.287251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.514 [2024-11-20 09:10:12.287288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.514 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.287480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.287514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.287771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.287804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.288023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.288057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.288178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.288211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 
00:25:56.515 [2024-11-20 09:10:12.288413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.288436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.288688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.288709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.288953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.288975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.289199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.289220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.289336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.289356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 
00:25:56.515 [2024-11-20 09:10:12.289458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.289479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.289633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.289653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.289828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.289852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.290013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.290034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.290215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.290245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 
00:25:56.515 [2024-11-20 09:10:12.290420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.290454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.290694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.290727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.290904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.290925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.291159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.291181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.291345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.291365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 
00:25:56.515 [2024-11-20 09:10:12.291591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.291624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.291832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.291864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.292104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.292138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.292275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.292306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 00:25:56.515 [2024-11-20 09:10:12.292431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.515 [2024-11-20 09:10:12.292463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.515 qpair failed and we were unable to recover it. 
00:25:56.515 [2024-11-20 09:10:12.292729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.292760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.293060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.293095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.293286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.293316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.293506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.293538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.293721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.293740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.293978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.293999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.294165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.515 [2024-11-20 09:10:12.294185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.515 qpair failed and we were unable to recover it.
00:25:56.515 [2024-11-20 09:10:12.294385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.294416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.294667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.294697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.294818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.294849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.295049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.295082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.295225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.295257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.295445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.295476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.295744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.295776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.296092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.296142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.296373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.296396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.296512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.296533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.296732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.296753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.296911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.296932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.297130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.297152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.297311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.297332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.297538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.297569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.297808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.297841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.298104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.298137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.298419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.298440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.298557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.298578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.298751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.298771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.299884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.299904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.300121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.300142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.300384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.300405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.300682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.300703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.300849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.300870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.301142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.301164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.301404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.301425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.301523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.301543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.301703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.301724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.302005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.302029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.302245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.302266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.302500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.302521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.302760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.516 [2024-11-20 09:10:12.302780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.516 qpair failed and we were unable to recover it.
00:25:56.516 [2024-11-20 09:10:12.302942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.302971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.303227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.303259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.303450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.303482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.303674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.303706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.303857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.303889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.304106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.304140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.304399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.304431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.304643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.304676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.304924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.304944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.305150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.305172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.305349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.305369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.305527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.305548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.305744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.305764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.305967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.305989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.306234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.306254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.306516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.306537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.306631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.306651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.306760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.306780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.306884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.306904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.307003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.307023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.307181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.307201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.307385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.307418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.307657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.307689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.307945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.308004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.308195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.308227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.308412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.308444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.308734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.308766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.308964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.309002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.309189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.309220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.309413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.309445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.309627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.309659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.309841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.309873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.310057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.310091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.310335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.310366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.310618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.310650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.310818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.310838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.311066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.311091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.311194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.517 [2024-11-20 09:10:12.311214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.517 qpair failed and we were unable to recover it.
00:25:56.517 [2024-11-20 09:10:12.311332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.311353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.311625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.311645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.311919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.311939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.312113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.312134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.312305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.312336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.312472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.312504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.312769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.312801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.313048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.313081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.313218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.313249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.313364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.313395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.313532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.313563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.313744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.313776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.314070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.314104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.314297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.314330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.314446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.314478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.314697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.314730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.314856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.314876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.315075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.315096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.315218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.315238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.315502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.315541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.315752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.315783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.315968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.316001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.316170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.316191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.316375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.316407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.316621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.316651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.316838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.316880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.317039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.317060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.317161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.317182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.317351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.317371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.317590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.317622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.317750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.317782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.318036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.518 [2024-11-20 09:10:12.318069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.518 qpair failed and we were unable to recover it.
00:25:56.518 [2024-11-20 09:10:12.318258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.318279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 00:25:56.518 [2024-11-20 09:10:12.318398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.318418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 00:25:56.518 [2024-11-20 09:10:12.318628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.318649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 00:25:56.518 [2024-11-20 09:10:12.318885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.318906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 00:25:56.518 [2024-11-20 09:10:12.319088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.319109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 
00:25:56.518 [2024-11-20 09:10:12.319290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.518 [2024-11-20 09:10:12.319322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.518 qpair failed and we were unable to recover it. 00:25:56.518 [2024-11-20 09:10:12.319537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.319575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.319774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.319805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.320066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.320263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.320385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.320515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.320718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.320913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.320933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.321175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.321207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.321400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.321431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.321734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.321766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.321964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.321985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.322202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.322222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.322336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.322356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.322615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.322636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.322798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.322818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.323034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.323057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.323173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.323193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.323308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.323328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.323595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.323616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.323846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.323866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.324018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.324039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.324209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.324230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.324420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.324451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.324664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.324696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.324905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.324937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.325146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.325166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.325357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.325378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.325586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.325606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.325775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.325796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.325963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.326006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.326149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.326180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.326306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.326337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.326552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.326585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 
00:25:56.519 [2024-11-20 09:10:12.326766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.326786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.326971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.327005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.327179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.327211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.327421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.519 [2024-11-20 09:10:12.327452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.519 qpair failed and we were unable to recover it. 00:25:56.519 [2024-11-20 09:10:12.327642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.327674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.327942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.327988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.328117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.328161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.328351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.328371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.328582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.328603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.328784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.328805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.328965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.328987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.329208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.329248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.329443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.329475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.329668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.329700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.329917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.329960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.330151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.330184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.330327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.330361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.330604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.330636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.330880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.330901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.331154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.331176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.331288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.331308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.331468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.331489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.331690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.331711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.331861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.331881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.332104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.332126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.332324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.332344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.332427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.332447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.332626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.332647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.332885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.332906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.333099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.333121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.333289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.333309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.333411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.333432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.333706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.333727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.333959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.333981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.334203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.334224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 
00:25:56.520 [2024-11-20 09:10:12.334386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.334407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.334681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.520 [2024-11-20 09:10:12.334730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.520 qpair failed and we were unable to recover it. 00:25:56.520 [2024-11-20 09:10:12.334883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.521 [2024-11-20 09:10:12.334915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.521 qpair failed and we were unable to recover it. 00:25:56.521 [2024-11-20 09:10:12.335130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.521 [2024-11-20 09:10:12.335163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.521 qpair failed and we were unable to recover it. 00:25:56.521 [2024-11-20 09:10:12.335397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.521 [2024-11-20 09:10:12.335418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.521 qpair failed and we were unable to recover it. 
00:25:56.523 [2024-11-20 09:10:12.359163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.359185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.359451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.359471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.359708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.359728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.359849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.359870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.360048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.360071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.360245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.360266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.360498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.360520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.360604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.360622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.360790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.360812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.360993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.361180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.361328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.361466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.361659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.361919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.361959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.362166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.362200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.362394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.362431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.362688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.362720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.362993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.363015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.363265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.363287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.363521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.363542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.363787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.363808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.364058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.364080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.364297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.364319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.364471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.364492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.364665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.364686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.364929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.364958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.365205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.365226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.365382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.365403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.365519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.365539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 
00:25:56.524 [2024-11-20 09:10:12.365661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.365681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.365909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.365930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.366211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.366244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.524 [2024-11-20 09:10:12.366451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.524 [2024-11-20 09:10:12.366483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.524 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.366680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.366712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.366991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.367026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.367318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.367339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.367531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.367552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.367777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.367798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.367980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.368013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.368229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.368262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.368454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.368486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.368759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.368791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.368994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.369027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.369215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.369236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.369475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.369497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.369771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.369792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.370038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.370060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.370243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.370263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.370436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.370457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.370688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.370728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.370984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.371017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.371230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.371262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.371514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.371546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.371842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.371875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.372017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.372050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.372230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.372256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.372437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.372458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.372737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.372758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.373006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.373028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.373143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.373164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.373370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.373391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.373618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.373639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.373885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.373906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.374027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.374167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.374291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.374440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.374696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.374976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.374998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 
00:25:56.525 [2024-11-20 09:10:12.375232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.375253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.525 [2024-11-20 09:10:12.375374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.525 [2024-11-20 09:10:12.375395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.525 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.375652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.375686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.375883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.375914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.376130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.376163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.376375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.376396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.376584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.376605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.376793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.376814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.376994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.377017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.377217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.377238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.377467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.377489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.377713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.377734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.377941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.377970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.378207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.378228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.378419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.378440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.378663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.378696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.378971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.379004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.379206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.379239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.379378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.379410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.379605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.379639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.379837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.379858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.380048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.380071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.380253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.380274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.380450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.380481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.380789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.380821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.381079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.381101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.381328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.381354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.381592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.381612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.381833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.381854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.382128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.382150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.382389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.382409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.382584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.382605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.382874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.382906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.383112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.383145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.383342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.383375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 
00:25:56.526 [2024-11-20 09:10:12.383517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.383548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.383746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.383779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.383971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.384005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.384220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.384241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.526 qpair failed and we were unable to recover it. 00:25:56.526 [2024-11-20 09:10:12.384404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.526 [2024-11-20 09:10:12.384445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.384687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.384719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.384972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.385017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.385263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.385284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.385537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.385558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.385673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.385693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.385799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.385820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.386089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.386111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.386312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.386332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.386507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.386528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.386759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.386779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.387029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.387052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.387222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.387243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.387356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.387377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.387663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.387723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.388023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.388064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.388227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.388261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.388516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.388549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.388754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.388786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.388921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.388942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.389143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.389165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.389336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.389382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.389529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.389561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.389783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.389815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.390121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.390143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.390269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.390291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.390398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.390419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.390538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.390558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.390791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.390814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.391022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.391043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.391164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.391184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.391297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.391319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.391547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.391568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.391743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.391764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.391993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.392027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.392231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.392264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 
00:25:56.527 [2024-11-20 09:10:12.392419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.392451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.392772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.392805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.527 [2024-11-20 09:10:12.393049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.527 [2024-11-20 09:10:12.393071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.527 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.393192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.393214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.393458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.393479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.393596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.393621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.393788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.393809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.394089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.394110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.394283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.394316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.394566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.394598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.394899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.394932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.395229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.395263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.395455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.395488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.395688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.395719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.395994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.396176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.396304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.396492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.396701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.396897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.396917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.397139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.397161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.397287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.397308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.397468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.397490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.397685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.397705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.397933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.397965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.398195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.398215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.398441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.398463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.398661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.398682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.398921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.398943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.399053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.399074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.399254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.399276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.399505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.399525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.399755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.399781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.400062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.400085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 00:25:56.528 [2024-11-20 09:10:12.400195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.528 [2024-11-20 09:10:12.400217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.528 qpair failed and we were unable to recover it. 
00:25:56.528 [2024-11-20 09:10:12.400389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.400411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.528 [2024-11-20 09:10:12.400530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.400551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.528 [2024-11-20 09:10:12.400786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.400819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.528 [2024-11-20 09:10:12.401043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.401077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.528 [2024-11-20 09:10:12.401279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.401310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.528 [2024-11-20 09:10:12.401457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.528 [2024-11-20 09:10:12.401489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.528 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.401809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.401842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.402060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.402094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.402276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.402298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.402413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.402433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.402620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.402641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.402871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.402893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.403116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.403334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.403491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.403668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.403783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.403993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.404014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.404327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.404361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.404492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.404524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.404720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.404753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.405010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.405044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.405247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.405279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.405539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.405571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.405702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.405741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.406017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.406068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.406222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.406254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.406447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.406468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.406718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.406739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.406976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.406998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.407172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.407194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.407310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.407331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.407565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.407586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.407761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.407782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.407968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.407990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.408105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.408126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.408234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.408256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.408423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.408444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.408565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.408586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.529 [2024-11-20 09:10:12.408764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.529 [2024-11-20 09:10:12.408785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.529 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.409016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.409038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.409233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.409254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.409369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.409390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.409674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.409695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.409923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.409945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.410063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.410083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.410201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.410222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.410382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.410403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.410528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.410549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.410807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.410829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.411031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.411054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.411227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.411248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.411436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.411468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.411685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.411717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.411922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.411967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.412156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.412189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.412443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.412476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.412786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.412817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.413103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.413138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.413383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.413404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.413599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.413620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.413847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.413868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.414033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.414080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.414283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.414316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.414550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.414583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.414913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.415013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.415317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.415355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.415556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.415591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.415777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.415809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.415998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.416033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.416182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.416215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.416429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.416461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.416685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.416717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.416883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.416916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.417128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.417162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.417438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.417472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.417689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.417722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.418008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.530 [2024-11-20 09:10:12.418042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.530 qpair failed and we were unable to recover it.
00:25:56.530 [2024-11-20 09:10:12.418166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.418208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.418401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.418434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.418724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.418756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.418963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.418996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.419202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.419235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.419507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.419539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.419826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.419860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.420070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.420106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.420300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.420332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.420535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.420567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.420822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.420854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.420994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.421028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.421226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.421260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.421458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.421491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.421716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.421748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.422057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.422091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.422287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.531 [2024-11-20 09:10:12.422320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.531 qpair failed and we were unable to recover it.
00:25:56.531 [2024-11-20 09:10:12.422521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.422554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.422829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.422862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.423149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.423184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.423463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.423496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.423722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.423755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 
00:25:56.531 [2024-11-20 09:10:12.423970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.424005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.424210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.424243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.424383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.424416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.424696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.424729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.424926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.424981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 
00:25:56.531 [2024-11-20 09:10:12.425168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.425213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.425421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.425454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.425730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.425763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.425959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.425994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.426208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.426241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 
00:25:56.531 [2024-11-20 09:10:12.426515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.426548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.426828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.426861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.427041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.427074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.427277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.427311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.531 [2024-11-20 09:10:12.427593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.427627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 
00:25:56.531 [2024-11-20 09:10:12.427814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.531 [2024-11-20 09:10:12.427847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.531 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.428039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.428072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.428296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.428329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.428516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.428548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.428792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.428825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.429033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.429066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.429215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.429248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.429445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.429478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.429699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.429732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.429982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.430015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.430219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.430251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.430443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.430475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.430758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.430792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.431094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.431128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.431340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.431373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.431642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.431674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.431869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.431902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.432164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.432198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.432429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.432462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.432746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.432778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.432979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.433014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.433139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.433172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.433397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.433429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.433646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.433679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.433879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.433912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.434166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.434199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.434344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.434377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.434597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.434631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.434929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.434971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.435227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.435260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.435398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.435437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.435589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.435621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.435893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.435927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.436193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.436226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.436431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.436464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 
00:25:56.532 [2024-11-20 09:10:12.436827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.436861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.437061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.437096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.437265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.437298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.532 qpair failed and we were unable to recover it. 00:25:56.532 [2024-11-20 09:10:12.437433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.532 [2024-11-20 09:10:12.437465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.437614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.437647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.437844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.437877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.438140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.438174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.438303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.438336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.438565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.438598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.438807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.438840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.439035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.439069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.439227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.439260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.439391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.439423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.439792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.439824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.440074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.440108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.440312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.440344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.440468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.440500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.440655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.440688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.440926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.440984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.441129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.441162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.441356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.441389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.441691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.441724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.441987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.442021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.442161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.442193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.442446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.442480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.442715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.442747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.442976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.443012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.443202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.443234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.443444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.443477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.443763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.443796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.444033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.444066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.444282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.444314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.444509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.444543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.444668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.444700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.444887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.444919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 
00:25:56.533 [2024-11-20 09:10:12.445147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.445184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.445409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.533 [2024-11-20 09:10:12.445440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.533 qpair failed and we were unable to recover it. 00:25:56.533 [2024-11-20 09:10:12.445681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.445715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.445992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.446027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.446225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.446258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.446397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.446430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.446643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.446675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.446896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.446927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.447149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.447181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.447405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.447439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.447741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.447773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.448000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.448035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.448242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.448275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.448472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.448506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.448826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.448858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.449103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.449136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.449349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.449382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.449581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.449613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.449805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.449837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.450055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.450089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.450363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.450397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.450552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.450584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.450840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.450872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.451098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.451132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.451386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.451418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.451554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.451586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.451861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.451893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.452133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.452167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.452313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.452345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.452564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.452597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.452895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.452928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.453214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.453246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.453467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.453499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.453759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.453792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.454035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.454070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.534 [2024-11-20 09:10:12.454215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.454247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.454401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.454435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.454704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.454736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.455022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.455060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 00:25:56.534 [2024-11-20 09:10:12.455281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.534 [2024-11-20 09:10:12.455312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.534 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.455565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.455603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.455794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.455826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.456109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.456145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.456373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.456405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.458047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.458110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.458293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.458326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.458592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.458625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.458924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.458973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.459255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.459287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.459491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.459524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.459677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.459711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.459858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.459891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.460118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.460152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.460341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.460376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.460616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.460649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.460803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.460834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.461037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.461071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.461257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.461290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.461437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.461470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.461685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.461720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.461917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.461959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.462107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.462140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.462348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.462382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.462602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.462636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.462884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.462917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.463138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.463174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.463410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.463442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.463636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.463673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.463961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.464003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.464137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.464171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.464381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.464417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.464767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.464801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.465009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.465044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.465255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.465287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.465430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.465462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 
00:25:56.535 [2024-11-20 09:10:12.465608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.465642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.465830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.465864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.535 [2024-11-20 09:10:12.466047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.535 [2024-11-20 09:10:12.466081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.535 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.466311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.466344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.466466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.466499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 
00:25:56.536 [2024-11-20 09:10:12.466746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.466786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.467008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.467042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.467195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.467228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.467482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.467515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.467734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.467769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 
00:25:56.536 [2024-11-20 09:10:12.468042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.468079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.468222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.468256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.468415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.468448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.468613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.468646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.468834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.468868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 
00:25:56.536 [2024-11-20 09:10:12.469064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.469100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.469305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.469338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.469520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.469553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.469805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.469838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 00:25:56.536 [2024-11-20 09:10:12.469987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.536 [2024-11-20 09:10:12.470023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.536 qpair failed and we were unable to recover it. 
00:25:56.538 [2024-11-20 09:10:12.492977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.538 [2024-11-20 09:10:12.493012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.538 qpair failed and we were unable to recover it. 00:25:56.538 [2024-11-20 09:10:12.493215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.538 [2024-11-20 09:10:12.493249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.538 qpair failed and we were unable to recover it. 00:25:56.538 [2024-11-20 09:10:12.493503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.539 [2024-11-20 09:10:12.493573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.539 qpair failed and we were unable to recover it. 00:25:56.539 [2024-11-20 09:10:12.493855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.539 [2024-11-20 09:10:12.493892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.539 qpair failed and we were unable to recover it. 00:25:56.539 [2024-11-20 09:10:12.494105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.539 [2024-11-20 09:10:12.494139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:56.539 qpair failed and we were unable to recover it. 
00:25:56.541 [2024-11-20 09:10:12.513360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.541 [2024-11-20 09:10:12.513386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.541 qpair failed and we were unable to recover it.
00:25:56.541 [2024-11-20 09:10:12.513509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.541 [2024-11-20 09:10:12.513532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.541 qpair failed and we were unable to recover it.
00:25:56.541 [2024-11-20 09:10:12.513649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.541 [2024-11-20 09:10:12.513673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.541 qpair failed and we were unable to recover it.
00:25:56.541 [2024-11-20 09:10:12.513779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.541 [2024-11-20 09:10:12.513800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:56.541 qpair failed and we were unable to recover it.
00:25:56.541 [2024-11-20 09:10:12.514010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.541 [2024-11-20 09:10:12.514069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.541 qpair failed and we were unable to recover it.
00:25:56.541 [2024-11-20 09:10:12.516444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.516467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.516580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.516600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.516756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.516777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.516940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.516974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.517204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.517847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.517964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.517986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.518080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.518254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.518390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.518505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.518699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.518886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.518908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.519079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.519190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.519301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.519492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.519702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.519836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.519858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.520213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.520819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.520932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.520963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 
00:25:56.542 [2024-11-20 09:10:12.521388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.542 [2024-11-20 09:10:12.521720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.542 [2024-11-20 09:10:12.521741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.542 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.521846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.521866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 
00:25:56.543 [2024-11-20 09:10:12.521958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.521979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.522139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.522346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.522477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.522614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 
00:25:56.543 [2024-11-20 09:10:12.522738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.522885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.522912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 
00:25:56.543 [2024-11-20 09:10:12.523446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.523798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.523969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.524009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.524199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.524238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 
00:25:56.543 [2024-11-20 09:10:12.524367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.524401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.524608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.524643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.524774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.524808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.524945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.525164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 
00:25:56.543 [2024-11-20 09:10:12.525336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.525558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.525728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.543 qpair failed and we were unable to recover it. 00:25:56.543 [2024-11-20 09:10:12.525888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.543 [2024-11-20 09:10:12.525919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.526053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.526201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.526345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.526561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.526719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.526941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.526987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.527124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.527284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.527465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.527611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.527755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.527929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.527975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.528108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.528145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.528344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.528378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.528485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.528513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.528616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.528642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.528828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.528850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.529011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.529037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.529257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.529281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.529457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.529478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.529660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.529683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.529851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.529872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 
00:25:56.837 [2024-11-20 09:10:12.530670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.837 [2024-11-20 09:10:12.530848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.837 qpair failed and we were unable to recover it. 00:25:56.837 [2024-11-20 09:10:12.530996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.531154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.531384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.531497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.531689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.531872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.531893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.532229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.532934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.532968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.533199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.533314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.533518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.533636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.533747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.533860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.533881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.534339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.534974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.534996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.535114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.535223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.535430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.535541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.535752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.535863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.535972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.535993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.536109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.536130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.536243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.536265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 00:25:56.838 [2024-11-20 09:10:12.536361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.838 [2024-11-20 09:10:12.536383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.838 qpair failed and we were unable to recover it. 
00:25:56.838 [2024-11-20 09:10:12.536479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.536500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.536660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.536682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.536990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.537393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.537967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.537989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.538103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.538233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.538358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.538547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.538671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.538777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.538897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.538917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.539023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.539044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.539281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.539360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.539604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.539643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.539777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.539811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.539921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.539974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.540164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.540349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.540478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.540606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.540745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.540865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.540887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.541288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.541937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.541986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 
00:25:56.839 [2024-11-20 09:10:12.542100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.542134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.542243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.542274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.542396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.839 [2024-11-20 09:10:12.542428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.839 qpair failed and we were unable to recover it. 00:25:56.839 [2024-11-20 09:10:12.542546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.542576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.542690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.542712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 
00:25:56.840 [2024-11-20 09:10:12.542820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.542840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.542996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.543105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.543239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.543515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 
00:25:56.840 [2024-11-20 09:10:12.543634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.543751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.543874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.543897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.544002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.544025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 00:25:56.840 [2024-11-20 09:10:12.544125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.840 [2024-11-20 09:10:12.544146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.840 qpair failed and we were unable to recover it. 
00:25:56.840 [2024-11-20 09:10:12.544251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.544272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.545391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.545435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.545551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.545574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.545736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.545764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.545946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.545979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.546071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.546093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.546297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.546320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.546477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.546499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.546726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.546747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.546909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.546930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.547855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.547877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.548918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.840 [2024-11-20 09:10:12.548960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.840 qpair failed and we were unable to recover it.
00:25:56.840 [2024-11-20 09:10:12.549085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.549256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.549468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.549609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.549740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.549916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.549943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.550919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.550940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.551883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.551905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.552894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.552913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.553864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.553886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.841 [2024-11-20 09:10:12.554043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.841 [2024-11-20 09:10:12.554064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.841 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.554934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.554962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.555938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.555963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.556074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.556192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.556299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.556492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.556671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.556855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1b99af0 is same with the state(6) to be set
00:25:56.842 [2024-11-20 09:10:12.557207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.557278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.557424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.557461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.557572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.557596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.557760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.557780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.557888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.557907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.558859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.558880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.559835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.559856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.560012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.560031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.842 [2024-11-20 09:10:12.560194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.842 [2024-11-20 09:10:12.560216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.842 qpair failed and we were unable to recover it.
00:25:56.843 [2024-11-20 09:10:12.560309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.843 [2024-11-20 09:10:12.560329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.843 qpair failed and we were unable to recover it.
00:25:56.843 [2024-11-20 09:10:12.560416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.560436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.560680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.560700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.560820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.560841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.560939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.560966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.561120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.561226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.561421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.561597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.561699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.561806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.561826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.561993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.562130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.562325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.562447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.562725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.562899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.562920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.563017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.563208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.563457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.563576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.563757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.563935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.563982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.564081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.564101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.564270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.564309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.564461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.564480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.564666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.564687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.564855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.564875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.565031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.565054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.565273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.565295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.565411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.565432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.565597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.565619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.565727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.565746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.565993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.566014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.566112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.566133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.566221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.566241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 
00:25:56.843 [2024-11-20 09:10:12.566394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.566417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.843 [2024-11-20 09:10:12.566498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.843 [2024-11-20 09:10:12.566519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.843 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.566688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.566709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.566795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.566815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.566902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.566921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.567152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.567176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.567346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.567368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.567524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.567545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.567650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.567676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.567873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.567895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.567986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.568162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.568270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.568458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.568704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.568923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.568945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.569593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.569843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.569865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.570017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.570208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.570384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.570557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.570742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.570957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.570979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.571085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.571106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.571284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.571305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.571474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.571496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.571669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.571691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.571849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.571871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.571980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.572002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 
00:25:56.844 [2024-11-20 09:10:12.572153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.572175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.572290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.572311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.572395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.572416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.572522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.844 [2024-11-20 09:10:12.572543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.844 qpair failed and we were unable to recover it. 00:25:56.844 [2024-11-20 09:10:12.572635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.572658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 
00:25:56.845 [2024-11-20 09:10:12.572757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.572776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.572963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.572985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.573104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.573233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.573347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 
00:25:56.845 [2024-11-20 09:10:12.573463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.573705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.573833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.573855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.574117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.574141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 00:25:56.845 [2024-11-20 09:10:12.574290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.845 [2024-11-20 09:10:12.574330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.845 qpair failed and we were unable to recover it. 
00:25:56.845 [2024-11-20 09:10:12.574508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.845 [2024-11-20 09:10:12.574529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.845 qpair failed and we were unable to recover it.
[... the same three-line error group repeats verbatim for each subsequent connection attempt to 10.0.0.2:4420 (tqpair=0x1b8bba0), with log timestamps advancing from 09:10:12.574710 through 09:10:12.595041 ...]
00:25:56.848 [2024-11-20 09:10:12.595211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.595318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.595491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.595604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.595720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 
00:25:56.848 [2024-11-20 09:10:12.595895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.595915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.596020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.596043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.596270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.596290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.596451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.596470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.596633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.596654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 
00:25:56.848 [2024-11-20 09:10:12.596752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.596772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.597010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.597195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.597385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.597484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 
00:25:56.848 [2024-11-20 09:10:12.597662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.597844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.597866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.598022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.598044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.848 [2024-11-20 09:10:12.598161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.848 [2024-11-20 09:10:12.598184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.848 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.598353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.598374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.598483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.598504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.598585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.598606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.598848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.598869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.599028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.599219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.599331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.599540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.599673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.599884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.599916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.600048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.600190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.600342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.600599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.600779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.600899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.600922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.601019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.601704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.601969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.601991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.602392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.602884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.602904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.603073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.603184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.603292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.603468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.603566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 
00:25:56.849 [2024-11-20 09:10:12.603685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.603795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.849 [2024-11-20 09:10:12.603814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.849 qpair failed and we were unable to recover it. 00:25:56.849 [2024-11-20 09:10:12.604008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 
00:25:56.850 [2024-11-20 09:10:12.604330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.604819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 
00:25:56.850 [2024-11-20 09:10:12.604941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.604967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 
00:25:56.850 [2024-11-20 09:10:12.605606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.605967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.605989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 
00:25:56.850 [2024-11-20 09:10:12.606326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 00:25:56.850 [2024-11-20 09:10:12.606850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.850 [2024-11-20 09:10:12.606872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.850 qpair failed and we were unable to recover it. 
00:25:56.853 [2024-11-20 09:10:12.623569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.623601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.623787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.623820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.623935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.624117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.624331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 
00:25:56.853 [2024-11-20 09:10:12.624568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.624693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.624798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.624915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.624935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.625057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.625077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 
00:25:56.853 [2024-11-20 09:10:12.625168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.625189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.625286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.853 [2024-11-20 09:10:12.625307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.853 qpair failed and we were unable to recover it. 00:25:56.853 [2024-11-20 09:10:12.625558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.625581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.625671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.625691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.625782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.625803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.625887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.625906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.626699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.626912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.626933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.627393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.627878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.627897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.627987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.628101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.628271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.628391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.628562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.628796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.628976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.628997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.629469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.629890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.629990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.630109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.630226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.630352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.630460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 00:25:56.854 [2024-11-20 09:10:12.630558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.854 [2024-11-20 09:10:12.630578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.854 qpair failed and we were unable to recover it. 
00:25:56.854 [2024-11-20 09:10:12.630668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.630688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.630772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.630791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.630940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.630968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.855 [2024-11-20 09:10:12.631345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.631831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.855 [2024-11-20 09:10:12.631933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.631961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.855 [2024-11-20 09:10:12.632480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.632969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.632991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.855 [2024-11-20 09:10:12.633079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.855 [2024-11-20 09:10:12.633696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.633900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.633921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.634022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.634043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 00:25:56.855 [2024-11-20 09:10:12.634276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.855 [2024-11-20 09:10:12.634297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.855 qpair failed and we were unable to recover it. 
00:25:56.858 [2024-11-20 09:10:12.649748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.649786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.649867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.649888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.649978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.649999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.650406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.650925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.650951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.651059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.651646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.651897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.651916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.652350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.652846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.652866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.653027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.653675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.653904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.653924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.654085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.654106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.654199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.654219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 
00:25:56.859 [2024-11-20 09:10:12.654309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.654330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.654435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.654454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.654546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.859 [2024-11-20 09:10:12.654566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.859 qpair failed and we were unable to recover it. 00:25:56.859 [2024-11-20 09:10:12.654657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.654676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.654755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.654776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.654870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.654889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.655430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.655836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.655858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.656112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.656664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.656976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.656997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.657321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.657914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.657933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.658026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.658136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.658249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.658358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.658460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 
00:25:56.860 [2024-11-20 09:10:12.658561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.860 qpair failed and we were unable to recover it. 00:25:56.860 [2024-11-20 09:10:12.658674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.860 [2024-11-20 09:10:12.658696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-11-20 09:10:12.658786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.861 [2024-11-20 09:10:12.658806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-11-20 09:10:12.658897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.861 [2024-11-20 09:10:12.658918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.861 qpair failed and we were unable to recover it. 00:25:56.861 [2024-11-20 09:10:12.659010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.861 [2024-11-20 09:10:12.659030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.861 qpair failed and we were unable to recover it. 
00:25:56.861 [2024-11-20 09:10:12.659120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.861 [2024-11-20 09:10:12.659140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.861 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error (tqpair=0x1b8bba0, addr=10.0.0.2, port=4420) / "qpair failed and we were unable to recover it." sequence repeats verbatim from 09:10:12.659227 through 09:10:12.672532 ...]
00:25:56.864 [2024-11-20 09:10:12.672633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.672652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.672793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.672814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.672905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.672923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 
00:25:56.864 [2024-11-20 09:10:12.673273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 
00:25:56.864 [2024-11-20 09:10:12.673818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.673929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.673953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 
00:25:56.864 [2024-11-20 09:10:12.674552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.674930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.674957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.675040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 
00:25:56.864 [2024-11-20 09:10:12.675140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.675241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.675335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.675434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.864 [2024-11-20 09:10:12.675526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 
00:25:56.864 [2024-11-20 09:10:12.675690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.864 [2024-11-20 09:10:12.675711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.864 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.675794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.675813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.675909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.675928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.676101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.676122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.676206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.676227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.676325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.676344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.676504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.676523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.676991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.677125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.677248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.677369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.677551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.677965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.677989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.678318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.678797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.678909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.678928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.679575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.679853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.679872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.680323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 
00:25:56.865 [2024-11-20 09:10:12.680877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.680896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.680985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.681004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.681152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.681173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.865 qpair failed and we were unable to recover it. 00:25:56.865 [2024-11-20 09:10:12.681259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.865 [2024-11-20 09:10:12.681278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.681360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 
00:25:56.866 [2024-11-20 09:10:12.681481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.681650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.681747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.681843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.681946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.681970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 
00:25:56.866 [2024-11-20 09:10:12.682054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.682323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.682427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.682557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.682661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 
00:25:56.866 [2024-11-20 09:10:12.682831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.682924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.682944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.683096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.683117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.683216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.683236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 00:25:56.866 [2024-11-20 09:10:12.683312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.866 [2024-11-20 09:10:12.683331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.866 qpair failed and we were unable to recover it. 
00:25:56.869 [2024-11-20 09:10:12.697656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.697675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.697823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.697843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.697929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.697953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 
00:25:56.869 [2024-11-20 09:10:12.698312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.698780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 
00:25:56.869 [2024-11-20 09:10:12.698894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.698913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 
00:25:56.869 [2024-11-20 09:10:12.699536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.869 [2024-11-20 09:10:12.699890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.869 [2024-11-20 09:10:12.699909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.869 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.700206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.700777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.700884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.700904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.701578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.701911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.701930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.702350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.702882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.702902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.703088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.703107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.703276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.703309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.703496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.703529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.703816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.703849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.703971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.703992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.704139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.704159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.704338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.704383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.704552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.704583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.704755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.704788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.704905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.704926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.870 [2024-11-20 09:10:12.705011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.705044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.705200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.705220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.705397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.705417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.705573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.705594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 00:25:56.870 [2024-11-20 09:10:12.705758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.870 [2024-11-20 09:10:12.705779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.870 qpair failed and we were unable to recover it. 
00:25:56.871 [2024-11-20 09:10:12.705945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.706026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.706134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.706164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.706361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.706394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.706510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.706543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.706790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.706822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 
00:25:56.871 [2024-11-20 09:10:12.707071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.707105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.707342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.707363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.707460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.707479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.707645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.707666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.707774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.707793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 
00:25:56.871 [2024-11-20 09:10:12.707987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 
00:25:56.871 [2024-11-20 09:10:12.708608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.708823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.708994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.709019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 00:25:56.871 [2024-11-20 09:10:12.709186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.871 [2024-11-20 09:10:12.709208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.871 qpair failed and we were unable to recover it. 
00:25:56.871 [2024-11-20 09:10:12.709422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.709442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.709521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.709541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.709696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.709716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.709807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.709828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.709988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.710895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.710915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.711087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.711107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.711364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.711437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.711583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.711621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.711798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.711831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.712003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.712044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.712266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.712300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.871 qpair failed and we were unable to recover it.
00:25:56.871 [2024-11-20 09:10:12.712486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.871 [2024-11-20 09:10:12.712519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.712697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.712729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.712915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.712960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.713931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.713962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.714878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.714899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.715061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.715081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.715230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.715267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.715512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.715544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.715660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.715692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.715869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.715890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.716107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.716141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.716336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.716368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.716489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.716520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.716645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.716678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.716866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.716898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.717808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.717828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.718072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.718096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.718261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.872 [2024-11-20 09:10:12.718282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.872 qpair failed and we were unable to recover it.
00:25:56.872 [2024-11-20 09:10:12.718433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.718453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.718533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.718555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.718778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.718799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.718965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.718986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.719892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.719911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.720976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.720998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.721165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.721186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.721459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.721479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.721641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.721663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.721837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.721857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.721933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.721959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.722202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.722222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.722507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.722528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.722699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.722720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.722930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.722955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.723889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.723991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.724966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.873 [2024-11-20 09:10:12.724987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.873 qpair failed and we were unable to recover it.
00:25:56.873 [2024-11-20 09:10:12.725157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.725178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.725338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.725359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.725556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.725577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.725726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.725746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.725905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.725926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.726158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.726240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.726462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.726500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.726676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.726709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.726841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.726877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.727952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.727973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.728936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.728964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.729116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.729137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.729328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.729349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.729453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.729473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.729621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.874 [2024-11-20 09:10:12.729640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.874 qpair failed and we were unable to recover it.
00:25:56.874 [2024-11-20 09:10:12.729749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.729772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.729936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.729965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 
00:25:56.874 [2024-11-20 09:10:12.730479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.730893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.730914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.874 [2024-11-20 09:10:12.731076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.731096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 
00:25:56.874 [2024-11-20 09:10:12.731321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.874 [2024-11-20 09:10:12.731356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.874 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.731465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.731497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.731627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.731659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.731772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.731804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.731970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.731991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.732141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.732320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.732427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.732550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.732676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.732858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.732877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.733039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.733162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.733367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.733516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.733723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.733866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.733897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.734454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.734965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.734998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.735124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.735156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.735285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.735316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.735484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.735515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.735712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.735743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.735964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.735985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.736147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.736168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.736253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.736274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.736526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.736557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.736744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.736776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.736901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.736934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.737129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.737163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 
00:25:56.875 [2024-11-20 09:10:12.737354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.737384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.737499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.737533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.737637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.875 [2024-11-20 09:10:12.737669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.875 qpair failed and we were unable to recover it. 00:25:56.875 [2024-11-20 09:10:12.737858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.737890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.738107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.738139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.738277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.738308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.738444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.738476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.738603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.738634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.738803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.738834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.738968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.739118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.739230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.739360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.739596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.739765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.739945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.739975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.740732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.740959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.740981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.741074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.741191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.741302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.741497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.741682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.741807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.741829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 00:25:56.876 [2024-11-20 09:10:12.742091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.876 [2024-11-20 09:10:12.742114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.876 qpair failed and we were unable to recover it. 
00:25:56.876 [2024-11-20 09:10:12.742210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.742230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.742341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.742363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.742517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.742537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.742625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.742645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.742795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.742835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.743107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.743266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.743422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.743685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.743886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.743998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.744019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.876 [2024-11-20 09:10:12.744260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.876 [2024-11-20 09:10:12.744281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.876 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.744442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.744463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.744631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.744663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.744910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.744943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.745147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.745178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.745380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.745413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.745511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.745539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.745712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.745755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.745928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.745956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.746066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.746087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.746197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.746217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.746454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.746486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.746660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.746691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.746862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.746893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.747153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.747193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.747395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.747426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.747542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.747574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.747749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.747905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.747936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.748215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.748249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.748423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.748456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.748724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.748757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.748933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.748959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.749078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.749098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.749265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.749287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.749416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.749446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.749621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.749654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.749827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.749859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.750865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.877 [2024-11-20 09:10:12.750886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.877 qpair failed and we were unable to recover it.
00:25:56.877 [2024-11-20 09:10:12.751099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.751133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.751318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.751351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.751548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.751578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.751791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.751822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.751963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.751996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.752122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.752155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.752337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.752368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.752645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.752684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.752856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.752877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.752978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.752997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.753893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.753935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.754144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.754177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.754368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.754401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.754569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.754600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.754852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.754873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.755023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.755046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.755212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.755250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.755436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.755470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.755644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.755677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.755848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.755879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.756007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.756041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.756307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.756340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.756604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.756636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.756777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.756810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.756998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.757034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.757201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.757221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.757412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.757445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.757633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.757666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.757836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.757867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.758106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.758128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.758215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.758235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.758313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.758334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.758430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.758450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.878 qpair failed and we were unable to recover it.
00:25:56.878 [2024-11-20 09:10:12.758686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.878 [2024-11-20 09:10:12.758707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.758919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.758939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.759945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.759971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.760130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.760150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.760233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.760252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.760406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.760427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.760643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.760664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.760820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.760839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.761001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.761046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.761174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.761207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.761331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.761363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.761543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.761573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.761771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.761805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.762046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.762068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.762245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.762277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.762407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.762439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.762565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.762597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.762711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.762744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.763870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.763889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.764043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.764066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.764269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.764301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.764423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:56.879 [2024-11-20 09:10:12.764454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:56.879 qpair failed and we were unable to recover it.
00:25:56.879 [2024-11-20 09:10:12.764627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.764659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.764769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.764799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.764971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.765006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.765173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.765194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.765369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.765400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 
00:25:56.879 [2024-11-20 09:10:12.765614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.765651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.765777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.879 [2024-11-20 09:10:12.765809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.879 qpair failed and we were unable to recover it. 00:25:56.879 [2024-11-20 09:10:12.765984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.766019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.766221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.766254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.766446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.766480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.766681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.766712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.766905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.766937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.767084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.767116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.767232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.767265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.767439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.767470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.767658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.767691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.767871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.767902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.768037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.768070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.768283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.768304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.768453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.768474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.768632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.768652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.768809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.768830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.769048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.769070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.769183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.769202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.769364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.769385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.769610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.769643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.769830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.769861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.770079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.770303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.770423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.770542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.770666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.770847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.770872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.771036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.771293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.771458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.771636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.771787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.771945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.771990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.772166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.772342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.772462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.772566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.772763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 00:25:56.880 [2024-11-20 09:10:12.772937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.772977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.880 qpair failed and we were unable to recover it. 
00:25:56.880 [2024-11-20 09:10:12.773062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.880 [2024-11-20 09:10:12.773082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.773181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.773294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.773467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.773582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.773707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.773875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.773896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.774437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.774823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.774985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.775152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.775271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.775460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.775560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.775733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.775843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.775962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.775982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.776081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.776263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.776452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.776565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.776756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.776924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.776944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.777053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.777074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.777178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.777199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 00:25:56.881 [2024-11-20 09:10:12.777283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.777302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it. 
00:25:56.881 [2024-11-20 09:10:12.777483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.881 [2024-11-20 09:10:12.777504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.881 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054 connect() failed, errno = 111 / nvme_tcp.c:2288 sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it) repeats roughly 110 more times, timestamps 09:10:12.777 through 09:10:12.801, all against the same tqpair, address, and port ...]
00:25:56.885 [2024-11-20 09:10:12.801853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.801874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.802089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.802269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.802450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.802551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.802752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.802929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.802980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.803162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.803194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.803435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.803467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.803637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.803668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.803799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.803830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.803968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.804002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.804187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.804208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.804421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.804441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.804607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.804628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.804802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.804823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.804983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.805182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.805310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.805422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.805666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.805899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.805919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.806084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.806196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.806368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.806541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.806779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.806955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.806977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.807075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.807095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 00:25:56.885 [2024-11-20 09:10:12.807269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.885 [2024-11-20 09:10:12.807289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.885 qpair failed and we were unable to recover it. 
00:25:56.885 [2024-11-20 09:10:12.807473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.807494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.807598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.807617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.807777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.807802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.807901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.807920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.808102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.808343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.808473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.808583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.808701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.808807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.808920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.808941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.809094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.809350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.809582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.809684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.809805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.809906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.809925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.810114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.810299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.810422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.810539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.810666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.810850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.810873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.811031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.811051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.811238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.811259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.811504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.811535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.811649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.811681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.811871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.811903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.812207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.812228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.812308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.812332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 
00:25:56.886 [2024-11-20 09:10:12.812427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.812447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.812589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.886 [2024-11-20 09:10:12.812610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.886 qpair failed and we were unable to recover it. 00:25:56.886 [2024-11-20 09:10:12.812838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.812858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 
00:25:56.887 [2024-11-20 09:10:12.813354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.813847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.813998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 
00:25:56.887 [2024-11-20 09:10:12.814208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.814383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.814570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.814719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.814840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.814859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 
00:25:56.887 [2024-11-20 09:10:12.815022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.815043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.815125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.815144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.815257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.815278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.815444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.815465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 00:25:56.887 [2024-11-20 09:10:12.815552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.887 [2024-11-20 09:10:12.815570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.887 qpair failed and we were unable to recover it. 
00:25:56.890 [2024-11-20 09:10:12.834953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.834975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.835080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.835246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.835484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.835605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 
00:25:56.890 [2024-11-20 09:10:12.835779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.835889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.835909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.836123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.836145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.836387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.836419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.836649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.836681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 
00:25:56.890 [2024-11-20 09:10:12.836821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.836855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.836973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.837005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.837129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.837149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.837246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.837266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.890 qpair failed and we were unable to recover it. 00:25:56.890 [2024-11-20 09:10:12.837476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.890 [2024-11-20 09:10:12.837496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 
00:25:56.891 [2024-11-20 09:10:12.837678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.837698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.837859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.837881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.837980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.838146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.838252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 
00:25:56.891 [2024-11-20 09:10:12.838432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.838599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.838770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.838884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.838903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.839075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 
00:25:56.891 [2024-11-20 09:10:12.839217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.839400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.839501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.839669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.839854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.839876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 
00:25:56.891 [2024-11-20 09:10:12.840034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.840056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.840229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.840267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.840512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.840545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.840746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.840780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.840954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.840975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 
00:25:56.891 [2024-11-20 09:10:12.841204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.841225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.841377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.841398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.841576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:56.891 [2024-11-20 09:10:12.841597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:56.891 qpair failed and we were unable to recover it. 00:25:56.891 [2024-11-20 09:10:12.841753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.841774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.841985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.842219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.842340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.842517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.842631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.842794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.842968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.842991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.843161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.843182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.843325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.843345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.843443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.843462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.843643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.843664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.843863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.843883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.844629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.844868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.844888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.845358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.845869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.845973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.845993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.846070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.846089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.846247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.846267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.846370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.846390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.177 [2024-11-20 09:10:12.846489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.846508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 
00:25:57.177 [2024-11-20 09:10:12.846604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.177 [2024-11-20 09:10:12.846623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.177 qpair failed and we were unable to recover it. 00:25:57.178 [2024-11-20 09:10:12.846702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.178 [2024-11-20 09:10:12.846722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.178 qpair failed and we were unable to recover it. 00:25:57.178 [2024-11-20 09:10:12.846895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.178 [2024-11-20 09:10:12.846916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.178 qpair failed and we were unable to recover it. 00:25:57.178 [2024-11-20 09:10:12.847161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.178 [2024-11-20 09:10:12.847182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.178 qpair failed and we were unable to recover it. 00:25:57.178 [2024-11-20 09:10:12.847259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.178 [2024-11-20 09:10:12.847280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.178 qpair failed and we were unable to recover it. 
00:25:57.178 [2024-11-20 09:10:12.847426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.847445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.847525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.847544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.847635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.847655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.847836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.847856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.848906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.848925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.849918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.849937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.850978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.850998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.851104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.851345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.851451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.851636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.851753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.851991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.852012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.852224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.852244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.852342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.852361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.852473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.852491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.178 qpair failed and we were unable to recover it.
00:25:57.178 [2024-11-20 09:10:12.852581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.178 [2024-11-20 09:10:12.852600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.852760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.852779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.852962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.852983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.853060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.853079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.853246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.853266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.853503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.853532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.853761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.853791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.853905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.853937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.854129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.854163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.854359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.854402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.854500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.854519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.854746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.854766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.854929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.854955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.855895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.855915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.856862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.856885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.857160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.857342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.857524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.857714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.857829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.857999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.858024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.858132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.858156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.858245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.858274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.858681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.179 [2024-11-20 09:10:12.858707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.179 qpair failed and we were unable to recover it.
00:25:57.179 [2024-11-20 09:10:12.858810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.858832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.858989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.859965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.859987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.860152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.860176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.860371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.860395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.860560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.860584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.860754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.860792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.860936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.860980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.861198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.861230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.861424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.861447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.861615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.861640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.861732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.861753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.861854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.861877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.862936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.862970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.863203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.863227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.863334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.863357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.863522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.863545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.863713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.863736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.863971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.864005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.864120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.864156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.864397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.864429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.864677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.864699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.864818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.864842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.864995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.865021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.865196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.865220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.865469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.865493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.865578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.865600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.180 qpair failed and we were unable to recover it.
00:25:57.180 [2024-11-20 09:10:12.865720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.180 [2024-11-20 09:10:12.865744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.865921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.865954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.866879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.866900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.867083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.867105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.867211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.867236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.867455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.181 [2024-11-20 09:10:12.867478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.181 qpair failed and we were unable to recover it.
00:25:57.181 [2024-11-20 09:10:12.867559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.867580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.867747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.867769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.867929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.867956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.868114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.868136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.868247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.868271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 
00:25:57.181 [2024-11-20 09:10:12.868542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.868564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.868713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.868734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.868969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.869003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.869179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.869212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.869328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.869360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 
00:25:57.181 [2024-11-20 09:10:12.869570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.869602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.869786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.869818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.869999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.870242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.870444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 
00:25:57.181 [2024-11-20 09:10:12.870587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.870773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.870893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.870919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.181 qpair failed and we were unable to recover it. 00:25:57.181 [2024-11-20 09:10:12.871082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.181 [2024-11-20 09:10:12.871103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.871203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.871223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.871314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.871335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.871528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.871551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.871729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.871751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.871939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.871967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.872241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.872373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.872503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.872618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.872739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.872929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.872959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.873183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.873210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.873376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.873398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.873557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.873578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.873764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.873786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.874043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.874284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.874405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.874607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.874798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.874912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.874934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.875034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.875158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.875341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.875556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.875674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.875866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.875888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.875997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.876019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.876182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.876210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.876371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.876398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.876643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.876675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 
00:25:57.182 [2024-11-20 09:10:12.876857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.876890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.877029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.877061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.877262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.877295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.877537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.182 [2024-11-20 09:10:12.877570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.182 qpair failed and we were unable to recover it. 00:25:57.182 [2024-11-20 09:10:12.877811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.877844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.183 [2024-11-20 09:10:12.877944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.877987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.878180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.878213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.878333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.878366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.878483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.878525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.878707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.878733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.183 [2024-11-20 09:10:12.878970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.878999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.879199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.879226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.879337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.879366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.879552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.879578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.879800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.879826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.183 [2024-11-20 09:10:12.879991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.880197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.880339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.880525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.880728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.183 [2024-11-20 09:10:12.880927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.880966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.881069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.881096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.881270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.881297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.881473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.881499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.881621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.881648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.183 [2024-11-20 09:10:12.881757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.881785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.882023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.882052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.882156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.882183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.882373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.882400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 00:25:57.183 [2024-11-20 09:10:12.882561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.183 [2024-11-20 09:10:12.882588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.183 qpair failed and we were unable to recover it. 
00:25:57.186 [2024-11-20 09:10:12.903375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.903404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.903516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.903544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.903738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.903767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.903889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.903918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.904139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.904188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 
00:25:57.186 [2024-11-20 09:10:12.904365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.904388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.904590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.904625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.904819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.904853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.904980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.905013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 00:25:57.186 [2024-11-20 09:10:12.905290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.186 [2024-11-20 09:10:12.905324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.186 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.905436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.905467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.905585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.905616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.905861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.905894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.906092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.906255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.906400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.906520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.906787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.906896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.906915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.907107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.907140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.907404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.907434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.907670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.907703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.907830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.907862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.907981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.908016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.908153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.908185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.908363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.908408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.908613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.908645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.908838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.908872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.909088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.909237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.909478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.909625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.909756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.909915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.909957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.910196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.910228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.910348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.910379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.910582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.910614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.910875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.910905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.911105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.911137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.911276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.911307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.911496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.911518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.911756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.911787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.911915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.911957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.912142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.912174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.912354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.912385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 
00:25:57.187 [2024-11-20 09:10:12.912589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.912611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.912735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.912770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.912975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.187 [2024-11-20 09:10:12.913009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.187 qpair failed and we were unable to recover it. 00:25:57.187 [2024-11-20 09:10:12.913276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.913308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.913490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.913522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.913701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.913733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.913859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.913892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.914483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.914865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.914885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.915119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.915238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.915362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.915540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.915657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.915829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.915849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.916045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.916171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.916346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.916468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.916744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.916915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.916935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.917619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.917966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.917996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.188 [2024-11-20 09:10:12.918394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.918903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 00:25:57.188 [2024-11-20 09:10:12.918995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.188 [2024-11-20 09:10:12.919016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.188 qpair failed and we were unable to recover it. 
00:25:57.191 [2024-11-20 09:10:12.937808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.191 [2024-11-20 09:10:12.937846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.938086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.938119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.938313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.938346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.938467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.938500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.938670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.938691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.938848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.938869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.939039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.939237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.939445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.939635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.939812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.939920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.939945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.940053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.940175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.940377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.940553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.940724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.940898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.940920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.941101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.941201] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.941306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.941446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.941618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.941805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.941826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.942065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.942181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.942373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.942473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.942581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.942748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.942770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.943012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 
00:25:57.192 [2024-11-20 09:10:12.943604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.943889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.943910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.944154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.192 [2024-11-20 09:10:12.944176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.192 qpair failed and we were unable to recover it. 00:25:57.192 [2024-11-20 09:10:12.944262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.944284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.944385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.944407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.944507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.944527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.944670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.944691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.944906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.944927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.945042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.945173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.945342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.945514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.945620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.945788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.945907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.945929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.946087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.946320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.946530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.946630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.946798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.946968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.946989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.947083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.947104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.947323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.947344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.947451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.947472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.947735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.947756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.947889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.947910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.947999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.948124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.948321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.948493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.948609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.948853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.948874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.949052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.949168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.949285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.949544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.949706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.949888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.949921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 00:25:57.193 [2024-11-20 09:10:12.950039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.193 [2024-11-20 09:10:12.950072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.193 qpair failed and we were unable to recover it. 
00:25:57.193 [2024-11-20 09:10:12.950352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.193 [2024-11-20 09:10:12.950386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.193 qpair failed and we were unable to recover it.
...
00:25:57.197 [2024-11-20 09:10:12.971217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.197 [2024-11-20 09:10:12.971251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.197 qpair failed and we were unable to recover it.
00:25:57.197 [2024-11-20 09:10:12.971454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.971487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.971615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.971636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.971805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.971826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.971984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.972099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.972287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.972473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.972573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.972693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.972798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.972973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.972994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.973709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.973899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.973999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.974132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.974312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.974424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.974616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.974785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.974897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.974916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.975025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.975139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.975316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.975576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.975748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.975858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.975879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.976022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.976064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.976308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.976328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.976487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.976527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.976662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.976694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 00:25:57.197 [2024-11-20 09:10:12.976864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.197 [2024-11-20 09:10:12.976896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.197 qpair failed and we were unable to recover it. 
00:25:57.197 [2024-11-20 09:10:12.977203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.977236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.977367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.977400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.977587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.977620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.977789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.977822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.978002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.978035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.978211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.978245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.978499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.978531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.978722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.978743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.978843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.978862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.979025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.979157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.979325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.979470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.979587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.979775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.979967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.979990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.980657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.980854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.980982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.981113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.981301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.981531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.981708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.981837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.981936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.981962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.982062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.982249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.982363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.982588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.982704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.982867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.982887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 
00:25:57.198 [2024-11-20 09:10:12.983054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.983076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.983290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.983310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.983467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.198 [2024-11-20 09:10:12.983488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.198 qpair failed and we were unable to recover it. 00:25:57.198 [2024-11-20 09:10:12.983730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.199 [2024-11-20 09:10:12.983762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.199 qpair failed and we were unable to recover it. 00:25:57.199 [2024-11-20 09:10:12.984004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.199 [2024-11-20 09:10:12.984037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.199 qpair failed and we were unable to recover it. 
00:25:57.199 [2024-11-20 09:10:12.984157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.199 [2024-11-20 09:10:12.984190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.199 qpair failed and we were unable to recover it.
[... the same three-message sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats with new timestamps from 09:10:12.984372 through 09:10:13.003990, elapsed time 00:25:57.199-00:25:57.202 ...]
00:25:57.202 [2024-11-20 09:10:13.004075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.004175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.004291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.004416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.004591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.004768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.004960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.004981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.005444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.005910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.005931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.006039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.006272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.006375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.006475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.006667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.006812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.006940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.006968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.007185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.007303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.007560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.007679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.007853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.007970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.007991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.008093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.008114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.008334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.008356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.008463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.008484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 
00:25:57.202 [2024-11-20 09:10:13.008700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.008720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.202 [2024-11-20 09:10:13.008822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.202 [2024-11-20 09:10:13.008843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.202 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.008930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.008961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.009120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.009140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.009294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.009315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.009539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.009560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.009720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.009741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.009971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.009994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.010182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.010203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.010309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.010330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.010477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.010498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.010594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.010613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.010756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.010777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.010995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.011233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.011351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.011527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.011723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.011911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.011932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.012091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.012258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.012366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.012490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.012679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.012786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.012920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.012942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.013117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.013226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.013461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.013578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.013739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.013907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.013929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.014179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.014201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.014288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.014308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.014448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.014469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.014577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.014598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.014820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.014842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.014999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.015021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.015218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.015251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 00:25:57.203 [2024-11-20 09:10:13.015366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.203 [2024-11-20 09:10:13.015398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.203 qpair failed and we were unable to recover it. 
00:25:57.203 [2024-11-20 09:10:13.015510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.015532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 00:25:57.204 [2024-11-20 09:10:13.015687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.015706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 00:25:57.204 [2024-11-20 09:10:13.015860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.015882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 00:25:57.204 [2024-11-20 09:10:13.016030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.016052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 00:25:57.204 [2024-11-20 09:10:13.016145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.016164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 
00:25:57.204 [2024-11-20 09:10:13.016403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.204 [2024-11-20 09:10:13.016424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.204 qpair failed and we were unable to recover it. 
[... the same failure repeated more than 100 times between 09:10:13.016571 and 09:10:13.035897: posix_sock_create connect() failed with errno = 111 (ECONNREFUSED), followed by nvme_tcp_qpair_connect_sock reporting a sock connection error for tqpair=0x1b8bba0 (addr=10.0.0.2, port=4420); every qpair attempt failed and could not be recovered ...]
00:25:57.207 [2024-11-20 09:10:13.036073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.036094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.036178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.036198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.036360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.036381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.036610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.036642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.036853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.036885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 
00:25:57.207 [2024-11-20 09:10:13.037020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.037262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.037420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.037585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.037734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 
00:25:57.207 [2024-11-20 09:10:13.037835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.037855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 
00:25:57.207 [2024-11-20 09:10:13.038615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.038823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.038985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.039172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.039342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 
00:25:57.207 [2024-11-20 09:10:13.039516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.039695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.039880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.039902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.040051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.040237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 
00:25:57.207 [2024-11-20 09:10:13.040425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.040615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.040740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.207 [2024-11-20 09:10:13.040923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.207 [2024-11-20 09:10:13.040944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.207 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.041041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.041168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.041269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.041366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.041540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.041654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.041887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.041907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.042019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.042262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.042449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.042552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.042652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.042770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.042791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.043485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.043892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.043911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.044060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.044252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.044362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.044468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.044584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.044703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.044907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.044926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.045720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.045973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.045997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.046588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.046620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.046856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.046879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 
00:25:57.208 [2024-11-20 09:10:13.047120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.047144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.047341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.208 [2024-11-20 09:10:13.047363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.208 qpair failed and we were unable to recover it. 00:25:57.208 [2024-11-20 09:10:13.047517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.047539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.047652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.047674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.047843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.047864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 
00:25:57.209 [2024-11-20 09:10:13.047978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.048004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.048225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.048248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.048464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.048485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.048644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.048666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.048853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.048874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 
00:25:57.209 [2024-11-20 09:10:13.048990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.049011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.049104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.049125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.049223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.049251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.049335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.049355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 00:25:57.209 [2024-11-20 09:10:13.049514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.209 [2024-11-20 09:10:13.049535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.209 qpair failed and we were unable to recover it. 
00:25:57.212 [... the same posix_sock_create connect() failed (errno = 111) and nvme_tcp_qpair_connect_sock error pair for tqpair=0x1b8bba0 (addr=10.0.0.2, port=4420) repeats through 09:10:13.065291; every reconnect attempt failed and the qpair could not be recovered ...]
00:25:57.212 [2024-11-20 09:10:13.065492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.065527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.065708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.065741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.065983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 
00:25:57.212 [2024-11-20 09:10:13.066359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.066865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.066884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 
00:25:57.212 [2024-11-20 09:10:13.066981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 
00:25:57.212 [2024-11-20 09:10:13.067619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.067896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.067917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 
00:25:57.212 [2024-11-20 09:10:13.068247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.212 [2024-11-20 09:10:13.068647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.212 qpair failed and we were unable to recover it. 00:25:57.212 [2024-11-20 09:10:13.068737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.068757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.068842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.068862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.069514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.069968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.069990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.070087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.070196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.070380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.070557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.070656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.070767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.070944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.070974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.071630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.071923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.071943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.072115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.072289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.072407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.072620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.072795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.072909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.072928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.073284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 
00:25:57.213 [2024-11-20 09:10:13.073885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.073905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.073994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.074015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.213 [2024-11-20 09:10:13.074182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.213 [2024-11-20 09:10:13.074204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.213 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.074305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.074325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.074427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.074448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 
00:25:57.214 [2024-11-20 09:10:13.074596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.074617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.074707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.074726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.074812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.074832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 
00:25:57.214 [2024-11-20 09:10:13.075272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 00:25:57.214 [2024-11-20 09:10:13.075749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.214 [2024-11-20 09:10:13.075768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.214 qpair failed and we were unable to recover it. 
00:25:57.214 [2024-11-20 09:10:13.075928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.214 [2024-11-20 09:10:13.075955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.214 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every retry from 09:10:13.075928 through 09:10:13.094007; every connection attempt to 10.0.0.2:4420 was refused and no qpair recovered ...]
00:25:57.217 [2024-11-20 09:10:13.094108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.094223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.094391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.094566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.094673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 
00:25:57.217 [2024-11-20 09:10:13.094781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.094890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.094909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 
00:25:57.217 [2024-11-20 09:10:13.095406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.095942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.095976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.096069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 
00:25:57.217 [2024-11-20 09:10:13.096199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.096363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.096525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.096700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.096796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 
00:25:57.217 [2024-11-20 09:10:13.096969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.096991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.217 qpair failed and we were unable to recover it. 00:25:57.217 [2024-11-20 09:10:13.097154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.217 [2024-11-20 09:10:13.097175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.097326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.097347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.097562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.097583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.097726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.097746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.097853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.097876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.097973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.097994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.098148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.098320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.098531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.098649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.098746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.098922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.098943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.099141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.099162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.099314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.099335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.099434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.099452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.099607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.099628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.099789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.099810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.099983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.100094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.100272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.100481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.100654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.100868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.100902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.101148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.101304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.101455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.101605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.101795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.101965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.101988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.102220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.102241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.102406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.102427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.102527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.102553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.102797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.102818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 00:25:57.218 [2024-11-20 09:10:13.102987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.218 [2024-11-20 09:10:13.103010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.218 qpair failed and we were unable to recover it. 
00:25:57.218 [2024-11-20 09:10:13.103092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.103198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.103320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.103483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.103606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 
00:25:57.219 [2024-11-20 09:10:13.103789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.103962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.103984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.104076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.104096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.104195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.104221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.104386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.104407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 
00:25:57.219 [2024-11-20 09:10:13.104501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.104520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.104745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.104767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.104981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.105118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.105289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 
00:25:57.219 [2024-11-20 09:10:13.105396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.105579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.105697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.105928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.105954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.106064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 
00:25:57.219 [2024-11-20 09:10:13.106244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.106359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.106489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.106615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 00:25:57.219 [2024-11-20 09:10:13.106724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.219 [2024-11-20 09:10:13.106743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.219 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.127143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.127312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.127420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.127655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.127838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.127968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.127990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.128105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.128215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.128405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.128605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.128720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.128970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.128991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.129206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.129228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.129377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.129397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.129544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.129564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.129714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.129734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.129816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.129837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.130000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.130022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.130188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.130210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.130393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.130426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.130612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.130643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.130825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.130857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 
00:25:57.222 [2024-11-20 09:10:13.131449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.131886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.222 [2024-11-20 09:10:13.131906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.222 qpair failed and we were unable to recover it. 00:25:57.222 [2024-11-20 09:10:13.132026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.132199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.132303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.132469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.132640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.132806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.132837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.133036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.133069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.133265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.133298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.133496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.133528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.133646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.133677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.133929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.133981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.134213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.134260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.134395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.134426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.134621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.134653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.134827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.134859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.135041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.135075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.135192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.135223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.135339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.135371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.135544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.135575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.135768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.135800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.136018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.136144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.136309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.136496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.136732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.136906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.136926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.223 [2024-11-20 09:10:13.137105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.137127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.137220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.137241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.137347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.137368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.137451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.137472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 00:25:57.223 [2024-11-20 09:10:13.137617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.223 [2024-11-20 09:10:13.137637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.223 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.137729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.137766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.138030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.138065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.138243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.138274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.138460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.138492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.138626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.138658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.138826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.138845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.139001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.139175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.139436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.139612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.139743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.139975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.139996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.140097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.140118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.140315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.140348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.140464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.140495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.140710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.140730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.140888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.140908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.141026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.141185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.141335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.141535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.141771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.141929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.141974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.142160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.142181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.142284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.142304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.142480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.142500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.142662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.142693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.142868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.142899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.143059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.143093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.143217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.143238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 
00:25:57.224 [2024-11-20 09:10:13.143404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.224 [2024-11-20 09:10:13.143436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.224 qpair failed and we were unable to recover it. 00:25:57.224 [2024-11-20 09:10:13.143546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.143578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.143748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.143780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.143968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.143989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.144183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.144220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.144422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.144454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.144641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.144673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.144908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.144929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.145022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.145048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.145299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.145320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.145471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.145502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.145709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.145740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.145922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.145965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.146138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.146158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.146253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.146273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.146377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.146398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.146555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.146574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.146794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.146826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.146976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.147128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.147332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.147553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.147772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.147954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.147975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.148075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.148097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.148185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.148206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.148464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.148495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.148618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.148651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.148767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.148798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.149032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.149154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.149353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.149502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.149723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.149876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.149907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 
00:25:57.225 [2024-11-20 09:10:13.150042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.150063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.150221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.150242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.150396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.150416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.150563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.150583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.225 qpair failed and we were unable to recover it. 00:25:57.225 [2024-11-20 09:10:13.150705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.225 [2024-11-20 09:10:13.150738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.150845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.150876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.151061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.151083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.151245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.151266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.151504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.151525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.151620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.151640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.151895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.151927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.152055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.152088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.152278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.152310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.152499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.152530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.152709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.152740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.152937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.152981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.153707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.153856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.153988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.154021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.154193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.154224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.154453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.154485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.154671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.154702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.154836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.154867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.154984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.155016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.155217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.155237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.155477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.155508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.155764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.155795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.155923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.155964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.156202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.156233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.156493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.156524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.156646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.156677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.156800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.156831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.156932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.156973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.157214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.157237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.157405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.157437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.157672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.157703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 
00:25:57.226 [2024-11-20 09:10:13.157891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.157922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.158115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.226 [2024-11-20 09:10:13.158136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.226 qpair failed and we were unable to recover it. 00:25:57.226 [2024-11-20 09:10:13.158232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.158251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.158423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.158443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.158658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.158690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 
00:25:57.227 [2024-11-20 09:10:13.158892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.158912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.159051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.159084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.159208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.159240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.159378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.159409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.159583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.159614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 
00:25:57.227 [2024-11-20 09:10:13.159787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.159818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.160053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.160086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.160263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.160284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.160439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.160459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 00:25:57.227 [2024-11-20 09:10:13.160555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.227 [2024-11-20 09:10:13.160575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.227 qpair failed and we were unable to recover it. 
00:25:57.227 [2024-11-20 09:10:13.160845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.160866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.161907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.161939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.162143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.162175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.162300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.162330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.162462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.162499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.162737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.162768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.162884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.162915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.163033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.163065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.163311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.163331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.163473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.163493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.163712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.163744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.163930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.163973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.164154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.164186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.164450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.164481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.164759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.164790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.164929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.164969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.165107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.165138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.165413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.165444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.165588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.165619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.165815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.165846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.227 [2024-11-20 09:10:13.165976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.227 [2024-11-20 09:10:13.165997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.227 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.166087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.166107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.166256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.166277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.166369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.166389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.166630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.166651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.166893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.166926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.167133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.167166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.167343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.167375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.167578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.167609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.167725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.167756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.167924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.167963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.168081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.168101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.168209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.168229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.168395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.168415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.168566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.168598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.168808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.168839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.169907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.169927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.170031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.170212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.170324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.170571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.170787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.170970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.171003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.171287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.171327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.171507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.171538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.171799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.171840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.171957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.171979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.228 [2024-11-20 09:10:13.172149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.228 [2024-11-20 09:10:13.172182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.228 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.172349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.172381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.172616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.172647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.172781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.172801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.172945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.173005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.173243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.173275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.173450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.173481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.173759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.173791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.173896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.173916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.174017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.174038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.174125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.174146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.174356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.174376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.174605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.174637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.174828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.174860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.175051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.175083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.175258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.175291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.175500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.175532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.175719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.175750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.176000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.176021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.176166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.176186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.176354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.176391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.176566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.176597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.176859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.176899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.177024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.177152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.177415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.177592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.177802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.177981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.178193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.178344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.178561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.178762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.178931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.178974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.179150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.179181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.179354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.179385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.179670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.179702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.179835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.229 [2024-11-20 09:10:13.179866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.229 qpair failed and we were unable to recover it.
00:25:57.229 [2024-11-20 09:10:13.180063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.180095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.180282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.180315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.180551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.180582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.180845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.180875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.181123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.181144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.181329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.181349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.181586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.181618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.181860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.181892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.182028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.182206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.182382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.182567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.182734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.182876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.182908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.183031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.183063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.183329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.183361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.183548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.183579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.183817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.183848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.184101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.184135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.184322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.184352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.184471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.184502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.184633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.184665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.184854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.184885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.185044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.185081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.185275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.185308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.185481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.185512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.185697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.185729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.185984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.186016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.186237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.186257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.186507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.186527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.186682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.186701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.186884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.186904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.187072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.187105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.187368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.187400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.187583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.187614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 
00:25:57.230 [2024-11-20 09:10:13.187741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.187772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.188030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.230 [2024-11-20 09:10:13.188069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.230 qpair failed and we were unable to recover it. 00:25:57.230 [2024-11-20 09:10:13.188198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.188231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.188434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.188466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.188721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.188752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 
00:25:57.231 [2024-11-20 09:10:13.189022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.189055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.189162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.189194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.189443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.189474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.189612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.189644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 00:25:57.231 [2024-11-20 09:10:13.189875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.231 [2024-11-20 09:10:13.189895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.231 qpair failed and we were unable to recover it. 
00:25:57.231 [2024-11-20 09:10:13.190050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.190073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.190291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.190312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.190477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.190497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.190653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.190673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.190769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.190790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 
00:25:57.526 [2024-11-20 09:10:13.191007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.191172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.191347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.191612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.191804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 
00:25:57.526 [2024-11-20 09:10:13.191945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.191974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.192125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.192224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.192340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.192464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 
00:25:57.526 [2024-11-20 09:10:13.192574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.526 [2024-11-20 09:10:13.192740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.526 [2024-11-20 09:10:13.192761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.526 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.192995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.193362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.193936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.193961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.194130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.194317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.194449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.194616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.194726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.194844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.194864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.195020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.195125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.195322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.195512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.195715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.195821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.195841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.196013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.196195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.196373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.196648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.196771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.196908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.196939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.197088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.197120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.197360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.197392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.197514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.197545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.197743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.197775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.197875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.197894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.197982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.198105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.527 [2024-11-20 09:10:13.198307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.198472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.198604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.198721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 00:25:57.527 [2024-11-20 09:10:13.198961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.527 [2024-11-20 09:10:13.198994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.527 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.199208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.199240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.199431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.199462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.199591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.199622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.199856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.199887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.200087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.200125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.200307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.200339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.200512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.200543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.200825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.200857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.200982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.201016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.201237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.201269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.201398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.201430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.201612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.201644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.201841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.201873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.202065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.202098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.202234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.202265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.202529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.202560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.202765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.202797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.202915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.202955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.203148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.203180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.203440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.203471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.203656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.203687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.203933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.203975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.204209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.204229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.204400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.204420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.204568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.204589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.204754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.204774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.204994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.205159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.205430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.205570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.205720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.205919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.205942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.206184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.206204] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.206388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.206409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.206553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.206573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 
00:25:57.528 [2024-11-20 09:10:13.206717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.206737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.206901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.206922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.207071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.528 [2024-11-20 09:10:13.207092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.528 qpair failed and we were unable to recover it. 00:25:57.528 [2024-11-20 09:10:13.207246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.207266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.207458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.207489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.207691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.207723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.207908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.207941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.208131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.208152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.208322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.208342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.208497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.208517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.208690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.208710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.208876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.208897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.209083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.209117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.209263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.209294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.209493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.209524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.209713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.209744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.210009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.210042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.210229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.210249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.210418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.210450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.210641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.210672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.210928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.210969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.211238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.211259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.211362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.211382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.211645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.211676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.211819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.211851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.212039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.212198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.212390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.212566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.212747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.212884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.212915] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.213163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.213321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.213447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.213632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.213838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.213952] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.213973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.214149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.214182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.214369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.214401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.214572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.214603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 
00:25:57.529 [2024-11-20 09:10:13.214803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.529 [2024-11-20 09:10:13.214834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.529 qpair failed and we were unable to recover it. 00:25:57.529 [2024-11-20 09:10:13.215104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.530 [2024-11-20 09:10:13.215138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.530 qpair failed and we were unable to recover it. 00:25:57.530 [2024-11-20 09:10:13.215337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.530 [2024-11-20 09:10:13.215367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.530 qpair failed and we were unable to recover it. 00:25:57.530 [2024-11-20 09:10:13.215502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.530 [2024-11-20 09:10:13.215533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.530 qpair failed and we were unable to recover it. 00:25:57.530 [2024-11-20 09:10:13.215704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.530 [2024-11-20 09:10:13.215735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.530 qpair failed and we were unable to recover it. 
00:25:57.530 [2024-11-20 09:10:13.215865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.530 [2024-11-20 09:10:13.215900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.530 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it" sequence for tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 repeats ~114 more times between 09:10:13.215900 and 09:10:13.239742; repeats elided ...]
00:25:57.533 [2024-11-20 09:10:13.239928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.239968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.240233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.240265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.240398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.240428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.240602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.240633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.240813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.240845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 
00:25:57.533 [2024-11-20 09:10:13.240986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.241018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.241197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.241229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.241417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.241448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.241709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.241740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.241929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.241972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 
00:25:57.533 [2024-11-20 09:10:13.242168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.242200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.242452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.242471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.242684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.242704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.242869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.242889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.242971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.242992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 
00:25:57.533 [2024-11-20 09:10:13.243160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.243181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.243335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.243368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.243651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.243683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.243809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.243840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.243960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.243993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 
00:25:57.533 [2024-11-20 09:10:13.244163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.244194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.244401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.244440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.244536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.244556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.244817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.244837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.245024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.245045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 
00:25:57.533 [2024-11-20 09:10:13.245212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.245244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.245494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.245531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.533 [2024-11-20 09:10:13.245715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.533 [2024-11-20 09:10:13.245746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.533 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.246008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.246040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.246276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.246307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.246517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.246548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.246718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.246750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.246879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.246910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.247187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.247220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.247403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.247434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.247612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.247643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.247831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.247863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.247989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.248144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.248362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.248510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.248720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.248875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.248907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.249118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.249150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.249318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.249350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.249537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.249570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.249694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.249725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.249906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.249938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.250144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.250176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.250359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.250401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.250491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.250511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.250745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.250777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.250966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.250998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.251258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.251300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.251515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.251535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.251691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.251711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.251811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.251831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.251916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.251936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.252086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.252294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.534 [2024-11-20 09:10:13.252424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.252537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.252720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.252894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.252925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 00:25:57.534 [2024-11-20 09:10:13.253066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.534 [2024-11-20 09:10:13.253098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.534 qpair failed and we were unable to recover it. 
00:25:57.535 [2024-11-20 09:10:13.253303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.253334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.253474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.253494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.253666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.253698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.253826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.253858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.254118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.254154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 
00:25:57.535 [2024-11-20 09:10:13.254424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.254445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.254626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.254646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.254869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.254901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.255147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.255180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 00:25:57.535 [2024-11-20 09:10:13.255355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.535 [2024-11-20 09:10:13.255387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.535 qpair failed and we were unable to recover it. 
00:25:57.535 [2024-11-20 09:10:13.255593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.255613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.255779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.255798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.255969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.256001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.256264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.256296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.256511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.256543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.256830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.256862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.257903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.257923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.258100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.258133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.258322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.258354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.258475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.258507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.258743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.258775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.258904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.258936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.259070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.259102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.259327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.259398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.259603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.259640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.259890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.259923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.260111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.535 [2024-11-20 09:10:13.260135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.535 qpair failed and we were unable to recover it.
00:25:57.535 [2024-11-20 09:10:13.260369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.260401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.260612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.260644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.260846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.260878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.261938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.261991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.262227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.262258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.262442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.262463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.262633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.262665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.262869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.262901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.263084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.263117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.263349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.263368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.263604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.263624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.263843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.263863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.263965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.263986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.264141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.264161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.264318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.264339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.264563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.264594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.264706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.264737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.264926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.264965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.265214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.265251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.265425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.265455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.265575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.265607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.265802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.265833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.266908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.266940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.267066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.267097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.536 [2024-11-20 09:10:13.267205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.536 [2024-11-20 09:10:13.267237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.536 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.267481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.267501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.267584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.267604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.267765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.267786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.267929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.267976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.268096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.268128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.268341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.268373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.268493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.268525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.268729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.268759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.268978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.269012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.269195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.269227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.269343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.269375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.269626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.269657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.269844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.269875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.270079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.270226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.270440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.270616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.270820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.270990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.271154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.271322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.271488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.271787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.271943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.271984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.272969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.272991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.537 [2024-11-20 09:10:13.273823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.537 [2024-11-20 09:10:13.273844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.537 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.273963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.273984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.274145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.274165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.274417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.274437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.274626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.274647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.274749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.274782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.274890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.274923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.275973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.275994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.276227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.276248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.276414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.276446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.276701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.276732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.276906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.276938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.277070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.277102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.277290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.277322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.277464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.277495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.277704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.277735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.277856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.538 [2024-11-20 09:10:13.277888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.538 qpair failed and we were unable to recover it.
00:25:57.538 [2024-11-20 09:10:13.278105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.278125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.278277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.278297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.278529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.278550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.278718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.278739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.278833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.278853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 
00:25:57.538 [2024-11-20 09:10:13.279016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.279142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.279323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.279462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.279573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 
00:25:57.538 [2024-11-20 09:10:13.279745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.279931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.279979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.280161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.280192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.280479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.280510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.280755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.280786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 
00:25:57.538 [2024-11-20 09:10:13.280897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.280929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.281070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.538 [2024-11-20 09:10:13.281103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.538 qpair failed and we were unable to recover it. 00:25:57.538 [2024-11-20 09:10:13.281293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.281325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.281522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.281555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.281728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.281760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.281961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.281994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.282256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.282277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.282440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.282461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.282560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.282592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.282775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.282807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.283004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.283231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.283337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.283524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.283636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.283800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.283972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.283993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.284083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.284123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.284255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.284286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.284459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.284491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.284688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.284720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.284967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.285103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.285282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.285451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.285591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.285822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.285853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.286039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.286072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.286314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.286346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.286541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.286573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.286782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.286814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.286989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.287021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.287217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.287237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.287406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.287426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.287591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.287622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.287831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.287863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.288079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.288112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.288220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.288240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.288438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.288476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 00:25:57.539 [2024-11-20 09:10:13.288680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.288711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.539 qpair failed and we were unable to recover it. 
00:25:57.539 [2024-11-20 09:10:13.288841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.539 [2024-11-20 09:10:13.288872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.289060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.289196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.289408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.289620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 
00:25:57.540 [2024-11-20 09:10:13.289765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.289925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.289983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.290171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.290202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.290371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.290403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.290514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.290534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 
00:25:57.540 [2024-11-20 09:10:13.290700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.290732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.290943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.290985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.291122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.291155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.291351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.291371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.291543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.291575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 
00:25:57.540 [2024-11-20 09:10:13.291690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.291720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.291996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.292029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.292149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.292181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.292297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.292317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 00:25:57.540 [2024-11-20 09:10:13.292465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.292485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 
00:25:57.540 [2024-11-20 09:10:13.292630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.540 [2024-11-20 09:10:13.292650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.540 qpair failed and we were unable to recover it. 
[... the same connect() failed (errno = 111) / "qpair failed and we were unable to recover it" pair repeated ~113 more times between 09:10:13.292892 and 09:10:13.314303, all for tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 ...]
00:25:57.543 [2024-11-20 09:10:13.314407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.314427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 
00:25:57.543 [2024-11-20 09:10:13.314658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.314689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.314931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.314974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.315073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.315104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.315295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.315314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.315483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.315503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 
00:25:57.543 [2024-11-20 09:10:13.315721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.315741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.315884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.315903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.315996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.316170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.316273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 
00:25:57.543 [2024-11-20 09:10:13.316452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.316635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.316833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.316870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.317047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.543 [2024-11-20 09:10:13.317079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.543 qpair failed and we were unable to recover it. 00:25:57.543 [2024-11-20 09:10:13.317247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.317267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.317434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.317464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.317589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.317620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.317878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.317909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.318044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.318076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.318203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.318234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.318474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.318514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.318690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.318709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.318880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.318913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.319033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.319065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.319251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.319282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.319548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.319580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.319700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.319731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.319992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.320134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.320405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.320529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.320695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.320881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.320901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.321117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.321138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.321294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.321315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.321474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.321505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.321634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.321664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.321851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.321883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.322017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.322178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.322372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.322482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.322742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.322925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.322945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.323104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.323217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.323321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.323495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.323673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.323809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.323840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 
00:25:57.544 [2024-11-20 09:10:13.324093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.324125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.324238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.324257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.544 qpair failed and we were unable to recover it. 00:25:57.544 [2024-11-20 09:10:13.324342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.544 [2024-11-20 09:10:13.324363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.324545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.324564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.324649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.324668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 
00:25:57.545 [2024-11-20 09:10:13.324765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.324785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.324936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.324962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.325120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.325140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.325359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.325379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.325541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.325561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 
00:25:57.545 [2024-11-20 09:10:13.325653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.325673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.325851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.325882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.326170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.326203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.326394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.326431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.326632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.326652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 
00:25:57.545 [2024-11-20 09:10:13.326741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.326761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.326942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.326984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.327264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.327297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.327418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.327450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.327627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.327647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 
00:25:57.545 [2024-11-20 09:10:13.327740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.327761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.327974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.327995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.328251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.328271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.328431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.328452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 00:25:57.545 [2024-11-20 09:10:13.328634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.545 [2024-11-20 09:10:13.328655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.545 qpair failed and we were unable to recover it. 
00:25:57.545 [2024-11-20 09:10:13.328757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.545 [2024-11-20 09:10:13.328777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.545 qpair failed and we were unable to recover it.
[... identical connect() failed (errno = 111, ECONNREFUSED) / nvme_tcp_qpair_connect_sock error / "qpair failed and we were unable to recover it." triples repeat continuously for tqpair=0x1b8bba0, addr=10.0.0.2, port=4420, timestamps 09:10:13.329009 through 09:10:13.351196; repeated entries elided ...]
00:25:57.548 [2024-11-20 09:10:13.351328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.351359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.351622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.351662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.351827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.351847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.351967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.352102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 
00:25:57.548 [2024-11-20 09:10:13.352247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.352414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.352582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.352747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.352849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.352869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 
00:25:57.548 [2024-11-20 09:10:13.353020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.548 [2024-11-20 09:10:13.353041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.548 qpair failed and we were unable to recover it. 00:25:57.548 [2024-11-20 09:10:13.353186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.353207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.353361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.353385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.353529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.353548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.353709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.353730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.353894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.353916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.354698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.354875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.354999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.355032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.355217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.355248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.355387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.355419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.355607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.355638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.355762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.355794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.356086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.356118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.356355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.356386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.356505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.356536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.356648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.356678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.356883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.356903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.357000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.357021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.357134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.357154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.357308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.357340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.357535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.357566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.357812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.357844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.358105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.358139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.358309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.358341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.358519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.358543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.358780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.358800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.358904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.358935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.359087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.359118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.359235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.359267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 00:25:57.549 [2024-11-20 09:10:13.359379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.359417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.549 qpair failed and we were unable to recover it. 
00:25:57.549 [2024-11-20 09:10:13.359673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.549 [2024-11-20 09:10:13.359693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.359837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.359857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.359954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.359975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.360133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.360153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.360316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.360337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.360555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.360587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.360773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.360804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.360928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.360970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.361164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.361196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.361334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.361365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.361566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.361597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.361863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.361894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.362037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.362070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.362255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.362287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.362516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.362536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.362695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.362715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.362878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.362899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.362992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.363013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.363163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.363184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.363355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.363386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.363573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.363605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.363869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.363901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.364156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.364189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.364319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.364339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.364436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.364457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.364646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.364665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.364838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.364870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.550 [2024-11-20 09:10:13.365521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.365971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.365991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 00:25:57.550 [2024-11-20 09:10:13.366206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.550 [2024-11-20 09:10:13.366227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.550 qpair failed and we were unable to recover it. 
00:25:57.553 [2024-11-20 09:10:13.386997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.387031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.387207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.387239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.387432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.387463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.387597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.387629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.387812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.387832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 
00:25:57.553 [2024-11-20 09:10:13.388058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.388091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.388207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.388238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.388496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.388533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.388699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.553 [2024-11-20 09:10:13.388719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.553 qpair failed and we were unable to recover it. 00:25:57.553 [2024-11-20 09:10:13.388874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.388894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.389050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.389070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.389292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.389313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.389457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.389481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.389644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.389674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.389848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.389879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.390017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.390050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.390161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.390200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.390350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.390370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.390617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.390648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.390831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.390862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.391034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.391067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.391383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.391415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.391618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.391649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.391853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.391884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.392071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.392217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.392373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.392671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.392774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.392894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.392914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.393017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.393058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.393164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.393196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.393323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.393354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.393604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.393635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.393804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.393835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.394013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.394045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.394240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.394272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.394476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.394508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.394698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.394730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.394898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.394934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 
00:25:57.554 [2024-11-20 09:10:13.395203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.395236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.395437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.395458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.395624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.554 [2024-11-20 09:10:13.395655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.554 qpair failed and we were unable to recover it. 00:25:57.554 [2024-11-20 09:10:13.395838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.395869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.395988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.396021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.396193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.396225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.396406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.396437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.396621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.396652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.396817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.396837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.397015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.397048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.397268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.397299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.397412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.397443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.397556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.397594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.397760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.397780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.398039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.398059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.398213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.398244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.398372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.398404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.398592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.398624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.398819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.398839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.399023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.399057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.399167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.399198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.399485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.399526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.399631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.399651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.399901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.399921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.400006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.400026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.400184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.400205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.400438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.400476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.400612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.400644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.400837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.400868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.401047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.401269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.401419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.401559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.401763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.401961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.401982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 
00:25:57.555 [2024-11-20 09:10:13.402093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.402114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.402344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.402380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.402573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.402604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.555 qpair failed and we were unable to recover it. 00:25:57.555 [2024-11-20 09:10:13.402737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.555 [2024-11-20 09:10:13.402768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.402879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.402910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.403140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.403206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.403467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.403505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.403679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.403712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.403882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.403914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.404133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.404166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.404380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.404412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.404695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.404726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.404993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.405027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.405267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.405298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.405508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.405540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.405659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.405691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.405938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.405978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.406193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.406225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.406353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.406395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.406585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.406617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.406878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.406902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.407105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.407231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.407417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.407558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.407683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.407890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.407922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.408059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.408205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.408406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.408555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.408809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.408934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.408960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.409123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.409154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.409355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.409387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 
00:25:57.556 [2024-11-20 09:10:13.409572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.409603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.409736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.409757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.409997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.556 [2024-11-20 09:10:13.410030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.556 qpair failed and we were unable to recover it. 00:25:57.556 [2024-11-20 09:10:13.410148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.410178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.410361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.410392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.410495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.410526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.410792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.410823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.411019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.411053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.411288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.411319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.411551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.411571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.411737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.411760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.411908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.411927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.412102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.412135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.412310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.412342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.412546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.412578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.412835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.412855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.413017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.413039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.413133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.413172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.413353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.413385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.413626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.413658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.413888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.413908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.414125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.414146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.414249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.414269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.414358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.414378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.414531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.414563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.414753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.414784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.415018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.415051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.415160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.415192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.415388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.415419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.415594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.415626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.415814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.415845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.416107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.416139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.416310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.416341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.416465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.416497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.416674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.416705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.416965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.416997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.417127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.417158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.417349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.417379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.417652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.417684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.417917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.417937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 
00:25:57.557 [2024-11-20 09:10:13.418105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.418125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.557 qpair failed and we were unable to recover it. 00:25:57.557 [2024-11-20 09:10:13.418339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.557 [2024-11-20 09:10:13.418371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.418553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.418584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.418767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.418799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.418943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.418973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.419063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.419234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.419408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.419584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.419708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.419873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.419893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.420082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.420119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.420314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.420347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.420542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.420574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.420684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.420716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.420967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.421001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.421122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.421154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.421388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.421420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.421621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.421652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.421823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.421855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.422025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.422196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.422341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.422500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.422725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.422960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.422994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.423258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.423290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.423530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.423562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.423689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.423721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.423903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.423935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.424180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.424212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.424394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.424427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.424686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.424710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.424819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.424840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.424995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.425016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 
00:25:57.558 [2024-11-20 09:10:13.425172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.425191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.558 [2024-11-20 09:10:13.425289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.558 [2024-11-20 09:10:13.425310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.558 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.425471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.425491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.425656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.425676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.425785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.425817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.425996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.426028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.426287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.426318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.426502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.426533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.426699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.426718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.426878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.426897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.427075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.427106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.427212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.427243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.427422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.427454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.427666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.427686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.427795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.427826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.428087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.428119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.428296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.428334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.428527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.428547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.428653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.428683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.428944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.428985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.429222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.429254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.429370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.429401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.429535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.429566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.429672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.429702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.429871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.429902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.430154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.430186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.430422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.430454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.430643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.430673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.430922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.430980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.431116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.431148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.431326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.431396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.431577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.431645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.431787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.431823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.431997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.432033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.432161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.432193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 
00:25:57.559 [2024-11-20 09:10:13.432391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.432423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.432627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.432659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.559 qpair failed and we were unable to recover it. 00:25:57.559 [2024-11-20 09:10:13.432849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.559 [2024-11-20 09:10:13.432882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.433147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.433181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.433306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.433337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.433625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.433657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.433921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.433960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.434141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.434163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.434324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.434361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.434548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.434579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.434758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.434788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.434997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.435126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.435341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.435557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.435767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.435942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.435996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.436266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.436297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.436479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.436511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.436679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.436709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.436895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.436927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.437122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.437154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.437340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.437370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.437499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.437530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.437795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.437826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.438011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.438042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.438283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.438314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.438556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.438576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.438735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.438754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.438981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.439199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.439438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.439599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.439746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.439967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.439999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 
00:25:57.560 [2024-11-20 09:10:13.440203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.440237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.560 [2024-11-20 09:10:13.440346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.560 [2024-11-20 09:10:13.440377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.560 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.440565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.440602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.440761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.440782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.440942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.440967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.441046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.441066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.441225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.441244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.441417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.441447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.441571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.441603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.441844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.441875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.442059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.442091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.442288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.442320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.442509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.442528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.442641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.442672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.442853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.442888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.443042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.443075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.443264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.443296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.443429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.443461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.443724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.443755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.443886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.443917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.444135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.444169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.444286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.444318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.444433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.444463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.444603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.444635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.444809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.444830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.445049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.445082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.445254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.445285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.445414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.445445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.445630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.445661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.445772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.445791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.446011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.446033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.446308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.446341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.446457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.446488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.446761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.446799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.447040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.447060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 
00:25:57.561 [2024-11-20 09:10:13.447213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.447234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.447346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.447367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.447522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.447542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.561 qpair failed and we were unable to recover it. 00:25:57.561 [2024-11-20 09:10:13.447708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.561 [2024-11-20 09:10:13.447740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.448001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.448034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.448226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.448257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.448476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.448510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.448702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.448733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.448970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.449003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.449128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.449160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.449403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.449435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.449688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.449720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.449905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.449936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.450086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.450118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.450243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.450275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.450559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.450590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.450700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.450731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.450919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.450959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.451083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.451114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.451401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.451443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.451632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.451664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.451772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.451794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.451976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.452132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.452372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.452510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.452722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.452873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.452904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.453179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.453212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.453381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.453412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.453591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.453622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.453762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.453793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.453924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.453965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.454168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.454202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.454390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.454421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.454684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.454716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.454896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.454927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.455131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.455164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.455344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.455376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.455511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.455542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 
00:25:57.562 [2024-11-20 09:10:13.455787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.455819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.562 qpair failed and we were unable to recover it. 00:25:57.562 [2024-11-20 09:10:13.455941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.562 [2024-11-20 09:10:13.455984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.456087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.456118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.456314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.456346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.456533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.456565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.456753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.456784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.457028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.457242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.457464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.457660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.457783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.457961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.457981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.458087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.458119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.458246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.458278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.458467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.458497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.458631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.458662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.458902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.458921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.459114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.459136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.459351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.459372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.459542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.459573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.459699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.459731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.459905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.459937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.460069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.460101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.460356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.460388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.460576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.460597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.460782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.460801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.460881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.460900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.461016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.461038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.461205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.461236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 00:25:57.563 [2024-11-20 09:10:13.461425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.563 [2024-11-20 09:10:13.461456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.563 qpair failed and we were unable to recover it. 
00:25:57.563 [2024-11-20 09:10:13.461629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.563 [2024-11-20 09:10:13.461662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.563 qpair failed and we were unable to recover it.
[... the three-line error sequence above repeats ~114 more times between timestamps 09:10:13.461851 and 09:10:13.484919, alternating between tqpair=0x1b8bba0 and tqpair=0x7f1c20000b90; every connect() attempt to 10.0.0.2, port=4420 fails with errno = 111 (connection refused), and each qpair fails without recovery ...]
00:25:57.566 [2024-11-20 09:10:13.485167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.485239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.485451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.485486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.485736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.485768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.485961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.485994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.486171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.486203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.486322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.486353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.486477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.486509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.486618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.486650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.486825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.486856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.487017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.487039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.487280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.487312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.487501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.487532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.487717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.487749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.487986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.488147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.488364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.488510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.488661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.488891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.488922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.489091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.489205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.489318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.489440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.489622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.489836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.489867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.490063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.490095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.490335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.490367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.490603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.490624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.490861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.490881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.491096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.491116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.491218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.491237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.491401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.491421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.491526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.491545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.491760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.491780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.491962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.492123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.492338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.492496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.492692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 00:25:57.567 [2024-11-20 09:10:13.492900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.567 [2024-11-20 09:10:13.492932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.567 qpair failed and we were unable to recover it. 
00:25:57.567 [2024-11-20 09:10:13.493145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.493176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.493319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.493349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.493519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.493549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.493733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.493764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.494049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.494082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.494208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.494240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.494355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.494385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.494577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.494608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.494843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.494873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.494989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.495010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.495238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.495270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.495491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.495522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.495637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.495667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.495846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.495865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.496014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.496035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.496217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.496248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.496432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.496462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.496665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.496696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.496958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.496979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.497092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.497112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.497224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.497244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.497331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.497350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.497508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.497550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.497796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.497827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.498008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.498042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.498165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.498196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.498375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.498406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.498587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.498617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.498829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.498850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.499065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.499292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.499444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.499593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.499856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.499971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.499992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 
00:25:57.568 [2024-11-20 09:10:13.500065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.500084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.500319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.568 [2024-11-20 09:10:13.500338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.568 qpair failed and we were unable to recover it. 00:25:57.568 [2024-11-20 09:10:13.500498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.569 [2024-11-20 09:10:13.500529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.569 qpair failed and we were unable to recover it. 00:25:57.569 [2024-11-20 09:10:13.500739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.569 [2024-11-20 09:10:13.500771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.569 qpair failed and we were unable to recover it. 00:25:57.569 [2024-11-20 09:10:13.500960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.569 [2024-11-20 09:10:13.500993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.569 qpair failed and we were unable to recover it. 
00:25:57.571 [2024-11-20 09:10:13.522677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.522707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.522896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.522926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.523111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.523141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.523333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.523365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.523541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.523572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 
00:25:57.571 [2024-11-20 09:10:13.523681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.523700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.523936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.523961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.524158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.524177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.524431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.524452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.524595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.524615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 
00:25:57.571 [2024-11-20 09:10:13.524713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.524732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.571 [2024-11-20 09:10:13.524838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.571 [2024-11-20 09:10:13.524859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.571 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.525002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.525021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.525125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.525146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.525373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.525404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 
00:25:57.572 [2024-11-20 09:10:13.525642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.525673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.525879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.525917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.526024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.526044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.526258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.526276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.526423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.526444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 
00:25:57.572 [2024-11-20 09:10:13.526684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.526704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.526847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.526867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.527032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.527053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.527152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.527175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.527330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.527350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 
00:25:57.572 [2024-11-20 09:10:13.527505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.527525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.527682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.527712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.528011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.528043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.528176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.528207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.528393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.528424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 
00:25:57.572 [2024-11-20 09:10:13.528606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.528635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.528772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.528802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.528987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.529020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.529202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.529221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 00:25:57.572 [2024-11-20 09:10:13.529425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.572 [2024-11-20 09:10:13.529445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.572 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.529657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.529678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.529774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.529793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.529960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.529982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.530219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.530398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.530512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.530643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.530742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.530916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.530935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.531035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.531055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.531265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.531285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.531431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.531452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.531652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.531671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.531824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.531844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.532058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.532160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.532338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.532456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.532654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.532831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.532852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.533008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.533030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.533205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.533224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.533462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.533482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.533592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.533613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.533827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.533847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.533996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.534016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.534263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.534283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.534361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.534381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.534594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.534615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.534770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.534790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.535049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.535069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.535233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.535252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.535411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.535432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.535646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.535666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.535846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.535866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 
00:25:57.864 [2024-11-20 09:10:13.536041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.864 [2024-11-20 09:10:13.536062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.864 qpair failed and we were unable to recover it. 00:25:57.864 [2024-11-20 09:10:13.536164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.536394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.536496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.536609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.536793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.536946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.536999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.537184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.537216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.537422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.537454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.537573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.537604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.537873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.537903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.538075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.538244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.538435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.538689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.538859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.538961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.538982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.539127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.539146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.539312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.539345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.539520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.539550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.539672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.539701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.539916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.539974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.540105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.540136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.540336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.540368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.540644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.540675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.540855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.540896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.541144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.541165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.541260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.541279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.541526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.541557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.541677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.541706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.541990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.542143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.542307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.542535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.542704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 
00:25:57.865 [2024-11-20 09:10:13.542873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.542904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.543090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.543121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.543247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.543278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.543391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.865 [2024-11-20 09:10:13.543422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.865 qpair failed and we were unable to recover it. 00:25:57.865 [2024-11-20 09:10:13.543598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.543629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.543740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.543770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.543992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.544162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.544370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.544487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.544667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.544795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.544919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.544938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.545119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.545157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.545273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.545304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.545515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.545546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.545807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.545838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.545965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.545996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.546102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.546133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.546332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.546363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.546543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.546574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.546686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.546705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.546791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.546812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.547043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.547064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.547277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.547307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.547428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.547459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.547629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.547661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.547907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.547927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.548092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.548272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.548460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.548665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.548811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.548966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.548997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.549106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.549126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.549203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.549222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.549381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.549413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.549541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.549571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.549840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.549871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.550146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.550167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 
00:25:57.866 [2024-11-20 09:10:13.550318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.550342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.550505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.866 [2024-11-20 09:10:13.550537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.866 qpair failed and we were unable to recover it. 00:25:57.866 [2024-11-20 09:10:13.550718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.550749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.550997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.551029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.551209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.551241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.867 [2024-11-20 09:10:13.551528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.551560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.551797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.551828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.552086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.552118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.552247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.552278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.552519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.552551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.867 [2024-11-20 09:10:13.552732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.552764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.552890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.552919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.553037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.553057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.553213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.553232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.553489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.553519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.867 [2024-11-20 09:10:13.553640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.553671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.553932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.553975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.554144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.554163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.554395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.554426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.554596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.554627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.867 [2024-11-20 09:10:13.554809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.554841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.555026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.555057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.555191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.555222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.555414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.555445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.555666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.555697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.867 [2024-11-20 09:10:13.555934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.555966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.556125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.556144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.556379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.556417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.556598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.556629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 00:25:57.867 [2024-11-20 09:10:13.556873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.867 [2024-11-20 09:10:13.556902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.867 qpair failed and we were unable to recover it. 
00:25:57.870 [... the same three-line connect() failed, errno = 111 / sock connection error / qpair failed sequence repeats for every subsequent reconnect attempt from 09:10:13.557093 through 09:10:13.579017, all against tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 ...]
00:25:57.870 [2024-11-20 09:10:13.579194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.579214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.579409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.579429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.579583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.579603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.579748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.579768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.580003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.580024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 
00:25:57.870 [2024-11-20 09:10:13.580134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.580154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.580249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.580269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.580358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.580377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.580477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.870 [2024-11-20 09:10:13.580497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.870 qpair failed and we were unable to recover it. 00:25:57.870 [2024-11-20 09:10:13.580592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.580612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.580724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.580744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.580888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.580907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.581063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.581096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.581273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.581305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.581426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.581456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.581571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.581604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.581839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.581870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.581977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.582118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.582383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.582588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.582802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.582959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.582991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.583397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.583433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.583621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.583653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.583860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.583894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.584122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.584154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.584349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.584380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.584488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.584517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.584629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.584659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.584791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.584811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.584993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.585158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.585343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.585454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.585686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.585861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.585881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.585980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.586147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.586323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.586554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.586665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.586823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.586854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.587040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.587072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.587249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.587279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 
00:25:57.871 [2024-11-20 09:10:13.587419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.587449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.587567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.587599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.871 qpair failed and we were unable to recover it. 00:25:57.871 [2024-11-20 09:10:13.587708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.871 [2024-11-20 09:10:13.587738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.587907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.587927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.588080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.588261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.588518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.588631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.588754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.588955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.588976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.589063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.589082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.589314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.589346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.589608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.589639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.589753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.589784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.589977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.589997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.590170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.590189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.590360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.590380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.590596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.590629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.590794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.590825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.591066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.591098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.591280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.591300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.591536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.591556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.591715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.591745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.591848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.591879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.592054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.592087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.592322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.592342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.592500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.592520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.592799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.592832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.593026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.593057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.593242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.593273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.872 [2024-11-20 09:10:13.593467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.593498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.593765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.593797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.593979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.594008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.594166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.594186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 00:25:57.872 [2024-11-20 09:10:13.594369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.872 [2024-11-20 09:10:13.594390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.872 qpair failed and we were unable to recover it. 
00:25:57.873 [2024-11-20 09:10:13.600154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.873 [2024-11-20 09:10:13.600213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:57.873 qpair failed and we were unable to recover it.
00:25:57.875 [2024-11-20 09:10:13.617592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.617612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 00:25:57.875 [2024-11-20 09:10:13.617855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.617886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 00:25:57.875 [2024-11-20 09:10:13.618081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.618113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 00:25:57.875 [2024-11-20 09:10:13.618304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.618335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 00:25:57.875 [2024-11-20 09:10:13.618522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.618553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 
00:25:57.875 [2024-11-20 09:10:13.618761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.618792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.875 qpair failed and we were unable to recover it. 00:25:57.875 [2024-11-20 09:10:13.618993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.875 [2024-11-20 09:10:13.619015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.619226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.619245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.619430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.619450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.619687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.619720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.620002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.620035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.620309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.620342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.620551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.620582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.620778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.620810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.621085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.621118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.621303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.621335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.621512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.621543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.621784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.621815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.621997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.622019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.622264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.622296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.622470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.622502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.622779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.622811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.622982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.623205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.623419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.623546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.623709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.623826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.623845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.624071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.624092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.624336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.624368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.624621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.624653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.624855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.624897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.625138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.625159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.625325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.625346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.625581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.625601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.625814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.625844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.626098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.626122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.626358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.626391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.626674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.626705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.626895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.626926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.627187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.627220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.627347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.627378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.627619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.627650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.627912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.627932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 00:25:57.876 [2024-11-20 09:10:13.628172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.876 [2024-11-20 09:10:13.628193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.876 qpair failed and we were unable to recover it. 
00:25:57.876 [2024-11-20 09:10:13.628379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.628398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.628623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.628643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.628853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.628872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.629018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.629146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.629330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.629501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.629662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.629857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.629878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.630084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.630117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.630402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.630435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.630573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.630604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.630817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.630848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.631026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.631059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.631226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.631256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.631520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.631540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.631765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.631784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.632022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.632043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.632278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.632314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.632603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.632633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.632766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.632796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.632975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.632996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.633171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.633203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.633514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.633545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.633748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.633779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.633974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.634007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.634197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.634229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.634409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.634429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.634645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.634677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 00:25:57.877 [2024-11-20 09:10:13.634939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.877 [2024-11-20 09:10:13.634999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.877 qpair failed and we were unable to recover it. 
00:25:57.877 [2024-11-20 09:10:13.635206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.877 [2024-11-20 09:10:13.635225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.877 qpair failed and we were unable to recover it.
00:25:57.877 to 00:25:57.880 [... the same three-line failure (posix.c:1054:posix_sock_create connect() errno = 111, then nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats verbatim for every retry from 09:10:13.635450 through 09:10:13.663520; only the timestamps differ ...]
00:25:57.880 [2024-11-20 09:10:13.663783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.880 [2024-11-20 09:10:13.663814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.880 qpair failed and we were unable to recover it. 00:25:57.880 [2024-11-20 09:10:13.664104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.880 [2024-11-20 09:10:13.664145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.880 qpair failed and we were unable to recover it. 00:25:57.880 [2024-11-20 09:10:13.664375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.880 [2024-11-20 09:10:13.664395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.880 qpair failed and we were unable to recover it. 00:25:57.880 [2024-11-20 09:10:13.664543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.880 [2024-11-20 09:10:13.664564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.880 qpair failed and we were unable to recover it. 00:25:57.880 [2024-11-20 09:10:13.664829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.664850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.665024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.665045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.665141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.665161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.665404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.665424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.665685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.665705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.665967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.665989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.666215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.666236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.666390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.666411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.666526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.666545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.666791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.666812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.667044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.667065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.667253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.667274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.667463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.667495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.667685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.667715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.667977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.668010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.668281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.668301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.668567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.668587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.668745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.668765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.668926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.668946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.669198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.669220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.669325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.669345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.669581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.669601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.669830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.669862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.670072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.670105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.670304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.670336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.670587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.670618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.670880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.670912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.671227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.671266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.671508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.671540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.671743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.671775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.672011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.672043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.672315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.672347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.672542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.672563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.672792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.672812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.673026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.673048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.673141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.673161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 
00:25:57.881 [2024-11-20 09:10:13.673259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.673279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.673560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.673591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.881 qpair failed and we were unable to recover it. 00:25:57.881 [2024-11-20 09:10:13.673825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.881 [2024-11-20 09:10:13.673856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.674034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.674067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.674272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.674292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.674457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.674489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.674774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.674805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.675089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.675128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.675346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.675366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.675553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.675573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.675693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.675713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.675933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.675978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.676159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.676190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.676479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.676510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.676698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.676729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.676909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.676941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.677218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.677239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.677466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.677486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.677728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.677748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.677914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.677934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.678206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.678227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.678492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.678523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.678659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.678690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.678983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.679022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.679141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.679173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.679413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.679444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.679717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.679749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.680023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.680056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.680319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.680351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.680534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.680565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.680852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.680884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.681021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.681042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.681259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.681291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.681467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.681498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.681688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.681719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 
00:25:57.882 [2024-11-20 09:10:13.681986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.682019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.882 qpair failed and we were unable to recover it. 00:25:57.882 [2024-11-20 09:10:13.682276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.882 [2024-11-20 09:10:13.682296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.883 qpair failed and we were unable to recover it. 00:25:57.883 [2024-11-20 09:10:13.682563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.883 [2024-11-20 09:10:13.682603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.883 qpair failed and we were unable to recover it. 00:25:57.883 [2024-11-20 09:10:13.682846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.883 [2024-11-20 09:10:13.682877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.883 qpair failed and we were unable to recover it. 00:25:57.883 [2024-11-20 09:10:13.683100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.883 [2024-11-20 09:10:13.683134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.883 qpair failed and we were unable to recover it. 
00:25:57.885 [2024-11-20 09:10:13.708765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.885 [2024-11-20 09:10:13.708785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.885 qpair failed and we were unable to recover it. 00:25:57.885 [2024-11-20 09:10:13.708890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.708910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.709037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.709058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.709303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.709324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.709566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.709587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.709713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.709733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.709997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.710175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.710296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.710568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.710695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.710932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.710959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.711125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.711146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.711379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.711399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.711645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.711665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.711911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.711930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.712105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.712126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.712353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.712374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.712594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.712614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.712852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.712871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.713059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.713080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.713246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.713267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.713455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.713479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.713719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.713741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.713979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.713999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.714237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.714258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.714480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.714500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.714718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.714739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.714908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.714929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.715196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.715217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.715443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.715464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.715569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.715590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.715824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.715844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.716072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.716094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 00:25:57.886 [2024-11-20 09:10:13.716200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.716219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.886 qpair failed and we were unable to recover it. 
00:25:57.886 [2024-11-20 09:10:13.716370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.886 [2024-11-20 09:10:13.716391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.716610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.716630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.716848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.716869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.717122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.717144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.717248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.717268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.717415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.717436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.717595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.717616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.717846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.717866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.718112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.718132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.718300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.718320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.718484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.718506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.718672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.718692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.718843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.718863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.719035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.719057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.719240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.719261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.719511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.719532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.719703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.719723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.719971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.719993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.720182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.720202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.720528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.720560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.720766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.720797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.720979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.721012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.721273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.721305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.721478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.721498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.721740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.721761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.722025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.722058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.722310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.722332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.722505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.722526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.722722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.722741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.722989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.723022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.723226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.723257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.723507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.723548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.723713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.723734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.723975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.723997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.724220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.724241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 
00:25:57.887 [2024-11-20 09:10:13.724462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.724494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.724644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.724676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.724973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.725007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.725264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.887 [2024-11-20 09:10:13.725285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.887 qpair failed and we were unable to recover it. 00:25:57.887 [2024-11-20 09:10:13.725537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.888 [2024-11-20 09:10:13.725568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.888 qpair failed and we were unable to recover it. 
00:25:57.888 [2024-11-20 09:10:13.725881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.725914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.726130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.726164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.726373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.726407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.726673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.726694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.726825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.726845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.727002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.727023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.727214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.727245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.727438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.727468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.727649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.727669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.727847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.727880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.728172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.728205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.728401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.728433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.728644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.728676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.728956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.728979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.729167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.729188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.729435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.729473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.729770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.729803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.729984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.730017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.730250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.730283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.730481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.730512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.730632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.730664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.730964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.730999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.731216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.731248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.731510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.731542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.731832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.731852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.732052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.732073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.732207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.732228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.732468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.732489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.732734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.732754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.732989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.733011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.733122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.733142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.733369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.733390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.733556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.733576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.733772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.733804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.734089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.734122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.734399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.734442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.734727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.734747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.888 qpair failed and we were unable to recover it.
00:25:57.888 [2024-11-20 09:10:13.734959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.888 [2024-11-20 09:10:13.734981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.735175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.735196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.735304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.735324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.735617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.735649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.735841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.735876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.736111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.736150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.736365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.736386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.736563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.736596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.736706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.736739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.737018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.737053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.737304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.737337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.737590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.737622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.737827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.737859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.738136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.738170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.738368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.738388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.738558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.738580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.738734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.738754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.739008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.739030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.739141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.739178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.739381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.739413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.739637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.739669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.739972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.740005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.740157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.740191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.740396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.740428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.740715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.740748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.740894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.740929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.741239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.741262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.741491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.741512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.741735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.741756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.742213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.742239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.742508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.742549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.742748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.742780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.742978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.743019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.743212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.743243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.743425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.743457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.743730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.743762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.743894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.743926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.744101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.744133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.889 [2024-11-20 09:10:13.744285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.889 [2024-11-20 09:10:13.744316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.889 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.744513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.744545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.744679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.744711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.744903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.744934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.745192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.745227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.745418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.745448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.745644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.745676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.745874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.745905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.746158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.746190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.746324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.746355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.746467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.746487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.746755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.746777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.746906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.746925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.747179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.747202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.747431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.747453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.747572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.747593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.747784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.747820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.748094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.748129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.748282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.748315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.748514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.748534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.748832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.748853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.749027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.749048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.749245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.749267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.749504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.749536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.749837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.749869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.750051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.750085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.750330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.750362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.750558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.750589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.750733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.750766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.751022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.751056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.751283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.751315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.751527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.751558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.751801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.751833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.752028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.752062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.752215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.752236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.752499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.752521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.752622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.752643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.752895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.752927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.753149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.890 [2024-11-20 09:10:13.753183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.890 qpair failed and we were unable to recover it.
00:25:57.890 [2024-11-20 09:10:13.753443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.891 [2024-11-20 09:10:13.753466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.891 qpair failed and we were unable to recover it.
00:25:57.891 [2024-11-20 09:10:13.753635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.753656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.753888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.753909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.754153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.754175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.754360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.754392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.754576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.754607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.754791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.754822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.755035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.755069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.755285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.755318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.755611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.755632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.755885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.755906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.756134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.756155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.756335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.756356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.756586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.756619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.756893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.756924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.757209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.757241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.757454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.757486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.757769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.757800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.758134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.758170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.758412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.758433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.758699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.758720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.758901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.758922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.759130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.759152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.759253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.759277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.759415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.759435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.759696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.759729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.759920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.759960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.760170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.760208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.760396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.760417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.760606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.760638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.760833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.760865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 
00:25:57.891 [2024-11-20 09:10:13.761120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.761153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.761434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.761465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.761646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.761667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.891 qpair failed and we were unable to recover it. 00:25:57.891 [2024-11-20 09:10:13.761892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.891 [2024-11-20 09:10:13.761913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.762177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.762200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.762478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.762500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.762760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.762782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.763035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.763058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.763284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.763304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.763486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.763508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.763712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.763744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.763932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.763973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.764183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.764215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.764340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.764371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.764632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.764664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.764962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.764995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.765216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.765250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.765504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.765535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.765675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.765707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.765895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.765932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.766149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.766182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.766381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.766412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.766690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.766723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.766904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.766936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.767254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.767287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.767493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.767524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.767737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.767769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.768010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.768044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.768322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.768355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.768580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.768601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.768763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.768784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.768957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.768979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.769108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.769129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.769242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.769264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.769561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.769582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.769749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.769770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.770004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.770037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.770205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.770236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.770504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.770536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.770777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.770797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 
00:25:57.892 [2024-11-20 09:10:13.770970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.770991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.771179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.892 [2024-11-20 09:10:13.771200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.892 qpair failed and we were unable to recover it. 00:25:57.892 [2024-11-20 09:10:13.771312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-11-20 09:10:13.771344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-11-20 09:10:13.771602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-11-20 09:10:13.771632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 00:25:57.893 [2024-11-20 09:10:13.771779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.893 [2024-11-20 09:10:13.771811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.893 qpair failed and we were unable to recover it. 
00:25:57.893 [2024-11-20 09:10:13.772043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:57.893 [2024-11-20 09:10:13.772077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:57.893 qpair failed and we were unable to recover it.
00:25:57.896 [... the identical connect() failed (errno = 111) / nvme_tcp_qpair_connect_sock error pair for tqpair=0x1b8bba0 (addr=10.0.0.2, port=4420) repeats verbatim for every retry from 09:10:13.772 through 09:10:13.800; each attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:57.896 [2024-11-20 09:10:13.800256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.800299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.800505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.800538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.800736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.800768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.800893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.800924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.801143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.801176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-11-20 09:10:13.801493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.801514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.801728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.801760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.802020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.802054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.802204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.802236] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.802539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.802572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-11-20 09:10:13.802835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.802868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.803113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.803146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.803375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.803408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.803650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.803670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.803764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.803785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-11-20 09:10:13.804040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.804073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.804224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.804256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.804464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.804497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.804810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.804842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.805100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.805133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-11-20 09:10:13.805278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.805310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.805535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.805567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.805772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.805793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.805956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.805977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 00:25:57.896 [2024-11-20 09:10:13.806157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.896 [2024-11-20 09:10:13.806190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.896 qpair failed and we were unable to recover it. 
00:25:57.896 [2024-11-20 09:10:13.806513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.806544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.806799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.806831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.807062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.807096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.807316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.807349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.807552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.807583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.807857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.807888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.808053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.808087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.808362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.808395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.808593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.808614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.808845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.808865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.809028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.809049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.809235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.809255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.809443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.809464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.809687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.809719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.809991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.810024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.810177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.810209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.810496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.810529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.810800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.810832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.811024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.811057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.811260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.811293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.811516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.811537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.811798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.811819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.812060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.812093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.812237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.812270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.812466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.812490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.812737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.812757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.813007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.813029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.813197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.813217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.813451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.813483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.813626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.813659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.813844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.813875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.814037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.814070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.814211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.814243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.814431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.814463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.814788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.814809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 
00:25:57.897 [2024-11-20 09:10:13.815043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.815066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.815192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.815213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.815442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.897 [2024-11-20 09:10:13.815473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.897 qpair failed and we were unable to recover it. 00:25:57.897 [2024-11-20 09:10:13.815770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.815802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.816100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.816134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 
00:25:57.898 [2024-11-20 09:10:13.816273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.816305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.816558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.816590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.816840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.816860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.817101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.817136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.817337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.817370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 
00:25:57.898 [2024-11-20 09:10:13.817571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.817602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.817860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.817881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.818070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.818092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.818326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.818358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 00:25:57.898 [2024-11-20 09:10:13.818556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.818588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 
00:25:57.898 [2024-11-20 09:10:13.818810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.898 [2024-11-20 09:10:13.818842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.898 qpair failed and we were unable to recover it. 
[identical connect()-failed (errno = 111) / qpair-connect-error / unrecoverable-qpair sequence for tqpair=0x1b8bba0 (10.0.0.2:4420) repeated from 09:10:13.819152 through 09:10:13.845377; duplicate log entries elided] 
00:25:57.901 [2024-11-20 09:10:13.845586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.845618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-11-20 09:10:13.845885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.845906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.846008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.846030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.846313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.846347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.846573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.846606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.846880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.846900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-11-20 09:10:13.847109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.847131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.847304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.847325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.847429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.847450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.847650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.847689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.847892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.847923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-11-20 09:10:13.848138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.848171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.848403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.848435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.848653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.848684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.848935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.848968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.849126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.849147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-11-20 09:10:13.849315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.849347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.849626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.849658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.849841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.849873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.850096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.850119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.850301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.850334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 
00:25:57.901 [2024-11-20 09:10:13.850555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.850587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.850909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.850940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.851207] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.901 [2024-11-20 09:10:13.851240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.901 qpair failed and we were unable to recover it. 00:25:57.901 [2024-11-20 09:10:13.851447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.851479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.851708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.851728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.851911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.851931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.852063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.852085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.852349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.852381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.852579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.852611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.852886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.852917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.853211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.853244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.853515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.853547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.853843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.853875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.854137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.854170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.854316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.854349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.854555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.854586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.854727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.854770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.854976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.854999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.855103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.855124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.855256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.855277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.855457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.855478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.855732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.855753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.855938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.855968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.856098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.856119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.856368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.856389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.856569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.856601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.856899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.856931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.857153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.857185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.857454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.857497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.857766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.857791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.857969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.857991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.858101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.858123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.858248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.858268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.858442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.858463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.858743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.858764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.858969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.858990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.859170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.859190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.859351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.859372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.859561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.859593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.859802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.859834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 
00:25:57.902 [2024-11-20 09:10:13.860017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.860051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.902 qpair failed and we were unable to recover it. 00:25:57.902 [2024-11-20 09:10:13.860327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.902 [2024-11-20 09:10:13.860359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.860506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.860527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.860699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.860732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.860935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.860977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-11-20 09:10:13.861112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.861143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.861444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.861487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.861615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.861636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.861887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.861918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.862056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.862112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-11-20 09:10:13.862274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.862306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.862564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.862596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.862806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.862826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.863054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.863232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:57.903 [2024-11-20 09:10:13.863382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.863647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.863768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.863962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.863984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 00:25:57.903 [2024-11-20 09:10:13.864146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:57.903 [2024-11-20 09:10:13.864167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:57.903 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.890005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.890038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.890297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.890330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.890524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.890555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.890732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.890753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.890924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.890975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.891185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.891218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.891341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.891373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.891593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.891624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.891870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.891908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.892140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.892174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.892318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.892350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.892556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.892588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.892805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.892837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.893105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.893138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.893367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.893398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.893640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.893661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.893912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.893932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.894119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.894140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.894244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.894264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.894442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.894463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.894716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.894738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.894851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.894873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.895000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.895022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.895199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.895219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.895337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.895370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 
00:25:58.195 [2024-11-20 09:10:13.895593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.895624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.895849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.895880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.896085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.896117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.195 qpair failed and we were unable to recover it. 00:25:58.195 [2024-11-20 09:10:13.896317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.195 [2024-11-20 09:10:13.896348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.896547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.896580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.896689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.896721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.896934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.896983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.897120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.897140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.897354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.897386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.897606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.897638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.897836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.897880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.898052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.898074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.898280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.898313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.898523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.898555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.898741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.898782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.899014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.899145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.899395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.899522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.899767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.899956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.899978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.900161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.900183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.900374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.900406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.900641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.900673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.900865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.900898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.901101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.901124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.901279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.901299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.901554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.901587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.901845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.901877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.902139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.902161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.902266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.902286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.902497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.902519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.902791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.902813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.902987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.903009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.903133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.903153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.903330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.903351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.903584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.903605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.903799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.903820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.904008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.904029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.904222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.904243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 
00:25:58.196 [2024-11-20 09:10:13.904369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.904389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.196 [2024-11-20 09:10:13.904514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.196 [2024-11-20 09:10:13.904535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.196 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.904665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.904685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.904851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.904871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.904986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 
00:25:58.197 [2024-11-20 09:10:13.905131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.905268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.905374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.905556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.905754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 
00:25:58.197 [2024-11-20 09:10:13.905935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.905964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.906079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.906104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.906208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.906228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.906415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.906437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 00:25:58.197 [2024-11-20 09:10:13.906554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.197 [2024-11-20 09:10:13.906575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.197 qpair failed and we were unable to recover it. 
00:25:58.200 [2024-11-20 09:10:13.933420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.933441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.933540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.933559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.933753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.933786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.933995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.934028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.934252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.934284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 
00:25:58.200 [2024-11-20 09:10:13.934484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.934515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.934725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.934757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.935013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.935208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.935411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 
00:25:58.200 [2024-11-20 09:10:13.935580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.935812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.935964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.935985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.936174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.936205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.936416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.936448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 
00:25:58.200 [2024-11-20 09:10:13.936660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.936698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.936906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.936927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.937164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.937185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.937299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.937320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.937556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.937588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 
00:25:58.200 [2024-11-20 09:10:13.937768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.937798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.938051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.938073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.938258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.200 [2024-11-20 09:10:13.938280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.200 qpair failed and we were unable to recover it. 00:25:58.200 [2024-11-20 09:10:13.938371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.938391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.938630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.938651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.938850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.938881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.939143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.939177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.939368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.939400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.939617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.939650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.939905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.939926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.940039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.940061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.940291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.940314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.940429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.940450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.940620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.940642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.940846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.940879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.941096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.941139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.941272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.941303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.941504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.941536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.941770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.941802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.942076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.942110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.942395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.942416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.942646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.942667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.942839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.942860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.943042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.943075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.943304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.943336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.943459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.943491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.943695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.943727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.943887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.943919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.944069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.944090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.944292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.944313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.944538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.944559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.944855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.944887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.945176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.945209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.945387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.945418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.945659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.945680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.945887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.945908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.946132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.946155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.946351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.946372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.946653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.946673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.946872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.946893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 
00:25:58.201 [2024-11-20 09:10:13.947123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.947146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.201 qpair failed and we were unable to recover it. 00:25:58.201 [2024-11-20 09:10:13.947388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.201 [2024-11-20 09:10:13.947409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.947520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.947545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.947743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.947776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.948027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.948061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.948312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.948344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.948493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.948526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.948739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.948771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.949044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.949078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.949234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.949265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.949416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.949449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.949694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.949725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.949946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.949989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.950249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.950281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.950583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.950616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.950818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.950852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.951079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.951115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.951315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.951347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.951500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.951532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.951831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.951863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.952052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.952084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.952342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.952373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.952485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.952518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.952733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.952765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.953049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.953071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.953246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.953267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.953448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.953480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.953680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.953711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.953925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.953986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.954133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.954164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.954375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.954406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.954747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.954779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.954988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.955010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.955261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.955293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 00:25:58.202 [2024-11-20 09:10:13.955498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.955531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.202 qpair failed and we were unable to recover it. 
00:25:58.202 [2024-11-20 09:10:13.955810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.202 [2024-11-20 09:10:13.955843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.956140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.956174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.956446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.956478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.956704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.956737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.957018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.957040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.957194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.957216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.957410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.957431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.957547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.957567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.957831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.957869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.958123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.958156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.958367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.958400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.958671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.958703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.958939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.958996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.959122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.959143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.959340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.959372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.959588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.959620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.959821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.959854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.960171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.960193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.960445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.960466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.960641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.960662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.960868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.960889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.961045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.961067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.961327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.961348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.961454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.961475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.961720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.961741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.961940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.961969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.962227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.962248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.962501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.962522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.962705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.962726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.962997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.963018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.963182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.963203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.963454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.963475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.963597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.963629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.963828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.963859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.964078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.964111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 
00:25:58.203 [2024-11-20 09:10:13.964306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.964343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.964490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.964522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.964765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.964786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.203 [2024-11-20 09:10:13.965018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.203 [2024-11-20 09:10:13.965041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.203 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.965285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.965316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.965550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.965582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.965844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.965864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.966141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.966174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.966368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.966399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.966699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.966732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.966869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.966900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.967052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.967073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.967326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.967347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.967600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.967621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.967791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.967812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.968071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.968106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.968310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.968342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.968641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.968673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.968940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.968970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.969098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.969118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.969300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.969321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.969467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.969498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.969771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.969803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.970097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.970130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.970350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.970382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.970584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.970616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.970815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.970836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.971027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.971052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.971254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.971285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.971440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.971473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.971720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.971751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.972022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.972165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.972345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.972539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.972745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.972919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.972970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.973178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.973210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.973348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.973379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.973677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.973708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 
00:25:58.204 [2024-11-20 09:10:13.973969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.974003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.974213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.204 [2024-11-20 09:10:13.974234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.204 qpair failed and we were unable to recover it. 00:25:58.204 [2024-11-20 09:10:13.974411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-11-20 09:10:13.974443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-11-20 09:10:13.974584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-11-20 09:10:13.974616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 00:25:58.205 [2024-11-20 09:10:13.974867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.205 [2024-11-20 09:10:13.974899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.205 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.000282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.000303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.000435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.000466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.000720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.000753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.000967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.001001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.001205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.001238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.001432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.001464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.001595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.001626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.001907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.001966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.002176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.002198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.002357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.002379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.002510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.002542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.002725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.002757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.002935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.002981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.003113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.003134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.003332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.003353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.003522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.003542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.003798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.003818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.004071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.004093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.004267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.004289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.004539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.004560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.004740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.004771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.005121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.005164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.005379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.005412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.005565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.005598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.005883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.005914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.006091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.006124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.006361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.006382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.006620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.006640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.006812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.006844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.007037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.007071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.007283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.007315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.007509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.007540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.007820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.007857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.008058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.008080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.008214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.008257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 
00:25:58.208 [2024-11-20 09:10:14.008467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.208 [2024-11-20 09:10:14.008505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.208 qpair failed and we were unable to recover it. 00:25:58.208 [2024-11-20 09:10:14.008798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.008829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.009032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.009065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.009230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.009250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.009430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.009462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.009678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.009709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.009986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.010009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.010187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.010207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.010333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.010364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.010578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.010610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.010823] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.010854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.011070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.011092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.011234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.011266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.011518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.011550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.011751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.011783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.012084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.012127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.012341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.012375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.012517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.012548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.012849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.012882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.013095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.013129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.013363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.013383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.013486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.013507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.013723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.013754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.013894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.013927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.014180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.014222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.014450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.014470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.014674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.014695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.014873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.014898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.015066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.015098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.015256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.015288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.015433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.015465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.015723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.015754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.015979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.016014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.016212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.016244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.016390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.016421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 
00:25:58.209 [2024-11-20 09:10:14.016814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.016847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.016976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.017010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.017222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.017254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.017411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.017443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.209 qpair failed and we were unable to recover it. 00:25:58.209 [2024-11-20 09:10:14.017744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.209 [2024-11-20 09:10:14.017777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.017990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.018025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.018173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.018216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.018392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.018412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.018527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.018549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.018830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.018861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.019117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.019150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.019304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.019325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.019455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.019476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.019763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.019783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.020036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.020071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.020320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.020352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.020512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.020544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.020738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.020769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.021042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.021064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.021192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.021216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.021403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.021436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.021584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.021615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.021807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.021838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.022070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.022092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.022349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.022371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.022608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.022629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.022724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.022745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.022920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.022962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.023131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.023162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.023371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.023402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.023613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.023644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.023847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.023879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.024055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.024088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.024323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.024355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.024604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.024636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.024829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.024861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.025003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.025158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.025279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.025407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 
00:25:58.210 [2024-11-20 09:10:14.025604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.025795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.025827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.210 [2024-11-20 09:10:14.026052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.210 [2024-11-20 09:10:14.026085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.210 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.026296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.026317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.026410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.026432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.026743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.026764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.027049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.027082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.027239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.027271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.027480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.027511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.027827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.027859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.028165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.028188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.028438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.028459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.028701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.028722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.028876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.028897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.029122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.029144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.029418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.029440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.029646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.029667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.029832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.029853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.029977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.029999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.030154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.030175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.030364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.030397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.030602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.030633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.030825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.030856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.031047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.031081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.031313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.031344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.031487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.031518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.031649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.031680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.031876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.031909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.032117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.032149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.032336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.032368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.032514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.032546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.032740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.032771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.032971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.033006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.033302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.033334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 00:25:58.211 [2024-11-20 09:10:14.033636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.211 [2024-11-20 09:10:14.033657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.211 qpair failed and we were unable to recover it. 
00:25:58.211 [2024-11-20 09:10:14.033903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.033924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.034191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.034266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.034484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.034522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.034812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.034847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.035061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.035096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-11-20 09:10:14.035350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.035384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.035641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.035674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.035986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.036021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.036230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.036263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.036525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.036559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-11-20 09:10:14.036752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.036784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.036992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.037027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.037172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.037205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.037353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.037386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.037617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.037649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-11-20 09:10:14.037933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.037977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.038184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.038217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.038357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.038391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.038651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.038683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.038968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.039003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-11-20 09:10:14.039210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.039243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.039391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.039424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.039632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.039665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.039886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.039919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.040209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.040242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.212 [2024-11-20 09:10:14.040446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.040486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.040687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.040719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.041002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.041037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.041247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.041279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 00:25:58.212 [2024-11-20 09:10:14.041490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.212 [2024-11-20 09:10:14.041523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.212 qpair failed and we were unable to recover it. 
00:25:58.213 [2024-11-20 09:10:14.045583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.213 [2024-11-20 09:10:14.045615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.213 qpair failed and we were unable to recover it.
00:25:58.213 [2024-11-20 09:10:14.045879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.213 [2024-11-20 09:10:14.045913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.213 qpair failed and we were unable to recover it.
00:25:58.213 [2024-11-20 09:10:14.046227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.213 [2024-11-20 09:10:14.046298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.213 qpair failed and we were unable to recover it.
00:25:58.213 [2024-11-20 09:10:14.046578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.213 [2024-11-20 09:10:14.046615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.213 qpair failed and we were unable to recover it.
00:25:58.213 [2024-11-20 09:10:14.046885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.213 [2024-11-20 09:10:14.046919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.213 qpair failed and we were unable to recover it.
00:25:58.215 [2024-11-20 09:10:14.068627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.068658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.068787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.068819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.069079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.069113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.069329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.069363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.069499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.069534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-11-20 09:10:14.069734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.069766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.069914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.069946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.070203] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.070235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.070444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.070477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 00:25:58.215 [2024-11-20 09:10:14.070771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.070803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.215 qpair failed and we were unable to recover it. 
00:25:58.215 [2024-11-20 09:10:14.071060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.215 [2024-11-20 09:10:14.071094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.071333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.071366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.071566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.071597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.073284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.073347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.073588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.073626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.073917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.073964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.074258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.074289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.074518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.074551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.074774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.074806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.075063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.075099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.075354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.075389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.075675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.075708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.075903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.075935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.076154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.076185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.076391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.076424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.076681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.076714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.076970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.077007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.077220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.077253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.077530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.077563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.077801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.077834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.078037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.078072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.078288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.078321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.078522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.078561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.078830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.078863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.079072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.079106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.079462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.079494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.079741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.079774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.080050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.080085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.080237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.080269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.080478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.080511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 
00:25:58.216 [2024-11-20 09:10:14.080835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.080868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.081072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.081105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.081308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.081342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.216 [2024-11-20 09:10:14.081620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.216 [2024-11-20 09:10:14.081652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.216 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.081777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.081810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.082057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.082090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.082297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.082330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.082493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.082524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.082724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.082756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.082957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.082991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.083133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.083166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.083385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.083418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.083562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.083594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.083732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.083764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.084046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.084082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.084292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.084323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.084438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.084470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.084722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.084754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.084983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.085019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.085179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.085212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.085465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.085498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.085819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.085850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.086077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.086111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.086271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.086303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.086454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.086486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.086623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.086653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.086920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.086961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.087174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.087208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.087503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.087534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.087745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.087778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.088009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.088044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.088197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.088228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.088425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.088475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.088704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.088736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 00:25:58.217 [2024-11-20 09:10:14.089034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.217 [2024-11-20 09:10:14.089069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.217 qpair failed and we were unable to recover it. 
00:25:58.217 [2024-11-20 09:10:14.089224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.217 [2024-11-20 09:10:14.089255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.217 qpair failed and we were unable to recover it.
[identical three-line error repeats continuously from 09:10:14.089 through 09:10:14.116: connect() to addr=10.0.0.2, port=4420 fails with errno = 111 and each qpair cannot be recovered]
00:25:58.220 [2024-11-20 09:10:14.117096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.117131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.117277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.117308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.117444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.117482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.117616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.117647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.117845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.117877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.118083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.118116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.118323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.118355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.118486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.118517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.118758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.118790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.119048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.119082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.119298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.119330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.119487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.119519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.119785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.119816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.120012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.120045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.120177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.120208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.120429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.120461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.120758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.120791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.120942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.120982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.121188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.121219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.121425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.121457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.121741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.121774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.122036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.122069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.122349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.122381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.122686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.122719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.123026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.123059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.123290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.123322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.123615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.123648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.123791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.123823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.124021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.124055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.124327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.124360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.124504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.124536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.124716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.124748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.125046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.125080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.125282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.125314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.125507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.125540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 
00:25:58.221 [2024-11-20 09:10:14.125730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.125761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.125966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.125999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.126213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.221 [2024-11-20 09:10:14.126245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.221 qpair failed and we were unable to recover it. 00:25:58.221 [2024-11-20 09:10:14.126499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.126530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.126779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.126812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.127009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.127041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.127242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.127275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.127529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.127566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.127815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.127845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.128105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.128139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.128352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.128383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.128587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.128619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.128897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.128929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.129198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.129231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.129443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.129473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.129775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.129806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.130104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.130137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.130388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.130420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.130670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.130702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.130977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.131010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.131136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.131168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.131446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.131478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.131804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.131835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.132080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.132114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.132392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.132422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.132562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.132593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.132904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.132935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.133145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.133178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.133319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.133351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.133542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.133573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.133859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.133890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.134185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.134217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.134436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.134468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.134592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.134624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.134853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.134885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.135164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.135197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.135343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.135374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.135564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.135596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.135794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.135824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.222 [2024-11-20 09:10:14.136089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.136123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 
00:25:58.222 [2024-11-20 09:10:14.136375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.222 [2024-11-20 09:10:14.136407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.222 qpair failed and we were unable to recover it. 00:25:58.223 [2024-11-20 09:10:14.136676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.223 [2024-11-20 09:10:14.136707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.223 qpair failed and we were unable to recover it. 00:25:58.223 [2024-11-20 09:10:14.136998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.223 [2024-11-20 09:10:14.137031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.223 qpair failed and we were unable to recover it. 00:25:58.223 [2024-11-20 09:10:14.137258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.223 [2024-11-20 09:10:14.137289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.223 qpair failed and we were unable to recover it. 00:25:58.223 [2024-11-20 09:10:14.137570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.223 [2024-11-20 09:10:14.137601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.223 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.165417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.165449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.165740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.165772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.166053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.166087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.166373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.166404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.166657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.166689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.166888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.166920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.167123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.167155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.167421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.167453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.167674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.167705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.167902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.167933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.168164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.168196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.168453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.168486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.168700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.168737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.169002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.169034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.169227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.169258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.169465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.169496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.169777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.169808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.169989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.170024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.170206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.170237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.170517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.170549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.170826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.170856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.171133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.171167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.171375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.171405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.171709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.171741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.171945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.171985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.172242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.172275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.172591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.172623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.172809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.172840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.173101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.173133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.173391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.173423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 
00:25:58.226 [2024-11-20 09:10:14.173641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.173673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.173933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.173991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.226 [2024-11-20 09:10:14.174193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.226 [2024-11-20 09:10:14.174224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.226 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.174430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.174462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.174736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.174767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.174970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.175002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.175253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.175286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.175485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.175517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.175765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.175797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.176097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.176131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.176374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.176406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.176702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.176733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.177009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.177043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.177250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.177282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.177487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.177518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.177794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.177824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.178103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.178137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.178386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.178418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.178622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.178653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.178906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.178938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.179091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.179122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.179343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.179375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.179625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.179661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.179921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.179961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.180243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.180275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.180479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.180511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.180814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.180845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.181039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.181073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.181351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.181382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.181581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.181613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.181887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.181918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.182186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.182219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.182338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.182369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.182648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.182679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.182798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.182829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.183117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.183152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.183308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.183339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.183535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.183566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.183776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.183807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 00:25:58.227 [2024-11-20 09:10:14.183991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.184025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.227 qpair failed and we were unable to recover it. 
00:25:58.227 [2024-11-20 09:10:14.184340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.227 [2024-11-20 09:10:14.184371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.184605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.184637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.184892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.184924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.185211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.185243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.185447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.185478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 
00:25:58.228 [2024-11-20 09:10:14.185780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.185812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.186082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.186116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.186392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.186423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.186734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.186767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 00:25:58.228 [2024-11-20 09:10:14.187077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.228 [2024-11-20 09:10:14.187111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.228 qpair failed and we were unable to recover it. 
00:25:58.228 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2479127 Killed "${NVMF_APP[@]}" "$@" 
00:25:58.228 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:25:58.228 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:25:58.229 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 
00:25:58.229 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:58.229 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=2480064 
00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 2480064 00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 2480064 ']' 00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:58.510 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 
00:25:58.510 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:25:58.511 [2024-11-20 09:10:14.216582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.216614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.216887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.216921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.217213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.217246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.217442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.217476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.217678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.217710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 
00:25:58.511 [2024-11-20 09:10:14.218007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.218043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.218299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.218332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.218643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.218675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.218977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.219011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.219159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.219191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 
00:25:58.511 [2024-11-20 09:10:14.219440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.219472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.219660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.219692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.219921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.219965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.220165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.220197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 00:25:58.511 [2024-11-20 09:10:14.220497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.511 [2024-11-20 09:10:14.220528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.511 qpair failed and we were unable to recover it. 
00:25:58.511 [2024-11-20 09:10:14.220814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.220847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.221137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.221170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.221391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.221424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.221615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.221647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.221898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.221937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.222175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.222210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.222469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.222502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.222767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.222800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.222985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.223020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.223248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.223281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.223429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.223463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.223594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.223627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.223820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.223853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.224058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.224094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.224374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.224407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.224610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.224644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.224831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.224863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.225121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.225158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.225341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.225373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.225686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.225721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.225971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.226012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.226212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.226244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.226425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.226459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.226587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.226618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.226870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.226903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.227247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.227282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.227465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.227498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.227752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.227785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.227987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.228023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.228266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.228299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.228501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.228533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.228788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.228821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.229124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.229159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.229455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.229488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.229697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.229731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 
00:25:58.512 [2024-11-20 09:10:14.229965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.229999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.230278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.230312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.512 [2024-11-20 09:10:14.230589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.512 [2024-11-20 09:10:14.230622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.512 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.230838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.230871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.231074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.231109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.513 [2024-11-20 09:10:14.231310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.231343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.231648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.231680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.231827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.231860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.232113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.232147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.232330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.232364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.513 [2024-11-20 09:10:14.232591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.232625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.232842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.232875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.233148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.233188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.233413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.233447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.233581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.233614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.513 [2024-11-20 09:10:14.233896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.233928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.234133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.234166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.234448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.234481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.234681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.234713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.234927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.234968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.513 [2024-11-20 09:10:14.235173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.235205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.235343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.235376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.235632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.235665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.235927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.235972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 00:25:58.513 [2024-11-20 09:10:14.236175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.236208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.513 [2024-11-20 09:10:14.236318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.513 [2024-11-20 09:10:14.236350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.513 qpair failed and we were unable to recover it. 
00:25:58.516 [2024-11-20 09:10:14.257413] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:25:58.516 [2024-11-20 09:10:14.257475] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:25:58.516 [2024-11-20 09:10:14.263268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.263300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.263407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.263438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.263711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.263744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.263978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.264012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.264211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.264245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 
00:25:58.516 [2024-11-20 09:10:14.264493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.264532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.264719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.264751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.516 [2024-11-20 09:10:14.264937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.516 [2024-11-20 09:10:14.264982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.516 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.265217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.265251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.265445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.265477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.265764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.265797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.266006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.266041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.266234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.266267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.266462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.266495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.266679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.266712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.266917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.266957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.267205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.267239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.267364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.267396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.267669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.267702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.267826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.267859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.268127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.268162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.268365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.268396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.268509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.268542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.268675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.268709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.268922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.268972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.269151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.269183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.269378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.269409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.269599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.269632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.269774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.269830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.270054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.270089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.270218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.270250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.270459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.270496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.270724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.270762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.270943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.271004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.271218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.271251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.271459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.271509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.271716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.271749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.271927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.271973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.272162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.272196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.272479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.272512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.272627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.272659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.272835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.272867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.273063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.273098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.273286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.273319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 00:25:58.517 [2024-11-20 09:10:14.273508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.273540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.517 qpair failed and we were unable to recover it. 
00:25:58.517 [2024-11-20 09:10:14.273668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.517 [2024-11-20 09:10:14.273707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.273904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.273937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.274140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.274173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.274352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.274384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.274515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.274547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.274684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.274716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.274971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.275006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.275192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.275226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.275503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.275536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.275668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.275700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.275881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.275914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.276042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.276076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.276253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.276286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.276499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.276531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.276670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.276704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.276893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.276928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.277202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.277237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.277359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.277393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.277547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.277580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.277826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.277858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.278059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.278093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.278221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.278255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.278442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.278474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.278655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.278687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.278897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.278928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.279217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.279251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.279431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.279464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.279718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.279750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.279935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.279981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.280126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.280159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.280285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.280316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.280559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.280590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.280835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.280868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.280986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.281020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.281269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.281302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 
00:25:58.518 [2024-11-20 09:10:14.281495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.281528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.281769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.281801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.281933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.281975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.518 qpair failed and we were unable to recover it. 00:25:58.518 [2024-11-20 09:10:14.282227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.518 [2024-11-20 09:10:14.282260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.282454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.282485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.282613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.282652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.282853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.282885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.283101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.283135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.283257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.283288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.283533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.283565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.283751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.283782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.283992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.284025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.284157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.284190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.284388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.284420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.284694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.284726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.284996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.285030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.285210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.285242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.285432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.285463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.285665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.285698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.285815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.285847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.286022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.286056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.286174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.286206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.286405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.286437] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.286689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.286722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.286842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.286873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.287067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.287100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.287220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.287252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.287467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.287498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.287765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.287797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.287926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.287970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.288147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.288179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.288373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.288405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.288536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.288569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.288741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.288772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.289038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.289073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 
00:25:58.519 [2024-11-20 09:10:14.289287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.289318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.519 [2024-11-20 09:10:14.289511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.519 [2024-11-20 09:10:14.289543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.519 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.289730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.289762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.289935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.289978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.290226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.290256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.290374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.290406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.290535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.290566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.290783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.290814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.291035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.291069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.291218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.291249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.291424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.291462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.291651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.291684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.291874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.291906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.292094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.292128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.292264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.292295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.292429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.292461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.292708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.292739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.292986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.293019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.293145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.293176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.293445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.293476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.293595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.293627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.293802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.293835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.294008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.294040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.294225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.294257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.294388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.294420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.294630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.294662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.294799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.294830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.295048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.295082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.295333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.295365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.295500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.295534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.295653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.295683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.295810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.295841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.296040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.296073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.296253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.296284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.296456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.296486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.520 [2024-11-20 09:10:14.296668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.296701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.296895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.296927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.297078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.297112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.297385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.297417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 00:25:58.520 [2024-11-20 09:10:14.297593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.520 [2024-11-20 09:10:14.297624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.520 qpair failed and we were unable to recover it. 
00:25:58.521 [2024-11-20 09:10:14.297895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.297926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.298140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.298173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.298294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.298326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.298575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.298606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.298786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.298817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 
00:25:58.521 [2024-11-20 09:10:14.298995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.299029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.299162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.299193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.299375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.299407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.299580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.299611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.299815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.299846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 
00:25:58.521 [2024-11-20 09:10:14.300052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.300085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.300280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.300311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.300532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.300563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.300680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.300712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 00:25:58.521 [2024-11-20 09:10:14.300897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.521 [2024-11-20 09:10:14.300928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.521 qpair failed and we were unable to recover it. 
00:25:58.521 [2024-11-20 09:10:14.301080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.521 [2024-11-20 09:10:14.301113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.521 qpair failed and we were unable to recover it.
[... the same triplet -- connect() failed, errno = 111 / sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." -- repeated ~113 more times between 09:10:14.301 and 09:10:14.327 ...]
00:25:58.524 [2024-11-20 09:10:14.327266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.524 [2024-11-20 09:10:14.327299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.524 qpair failed and we were unable to recover it.
00:25:58.524 [2024-11-20 09:10:14.327545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.327575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.327766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.327798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.327928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.327969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.328205] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.328238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.328476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.328507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 
00:25:58.524 [2024-11-20 09:10:14.328746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.328777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.329019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.329053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.329229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.329260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.329431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.329461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.329640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.329672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 
00:25:58.524 [2024-11-20 09:10:14.329944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.329986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.330227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.330259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.330521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.330552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.330814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.330846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 00:25:58.524 [2024-11-20 09:10:14.331031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.524 [2024-11-20 09:10:14.331065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.524 qpair failed and we were unable to recover it. 
00:25:58.524 [2024-11-20 09:10:14.331243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.331276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.331481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.331513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.331700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.331731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.331875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.331907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.332154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.332188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.332384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.332417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.332590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.332622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.332830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.332863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.333119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.333154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.333338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.333369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.333544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.333576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.333781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.333813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.333931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.333984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.334157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.334189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.334399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.334430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.334616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.334648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.334818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.334850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.335041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.335075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.335258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.335291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.335470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.335502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.335617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.335649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.335824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.335856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.336094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.336127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.336367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.336399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.336594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.336626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.336825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.336857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.337125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.337160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.337347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.337380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.337570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.337603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.337795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.337826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.337961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.337995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.338108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.338141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.338344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.338376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.338552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.338585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.338724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.338756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 
00:25:58.525 [2024-11-20 09:10:14.338993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.339028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.339202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.339234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.339473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.525 [2024-11-20 09:10:14.339504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.525 qpair failed and we were unable to recover it. 00:25:58.525 [2024-11-20 09:10:14.339772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.339804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.340046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.340079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.340218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.340251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.340522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.340554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.340760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.340793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.340987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.341022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.341200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.341231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.341464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.341497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.341667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.341699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.341745] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:58.526 [2024-11-20 09:10:14.341835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.341867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.342130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.342164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.342438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.342469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.342707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.342739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.342874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.342906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.343130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.343163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.343424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.343455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.343644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.343676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.343819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.343851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.343977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.344190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.344480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.344634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.344772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.344939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.344982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.345155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.345187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.345389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.345421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.345551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.345583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.345698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.345736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.345935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.345977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.346224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.346256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.346469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.346502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.346627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.346658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.346844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.346876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.347075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.347108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.347223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.347255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.347444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.347477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.526 [2024-11-20 09:10:14.347688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.347720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 
00:25:58.526 [2024-11-20 09:10:14.347905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.526 [2024-11-20 09:10:14.347937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.526 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.348219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.348252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.348570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.348602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.348791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.348823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.349023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.349056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.349270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.349302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.349494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.349526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.349790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.349821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.349985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.350018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.350195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.350228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.350477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.350510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.350746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.350780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.351046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.351081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.351202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.351234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.351351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.351385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.351579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.351611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.351805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.351838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.352104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.352139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.352309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.352341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.352512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.352546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.352676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.352709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.352906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.352940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.353219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.353252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.353435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.353467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.353654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.353688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.353931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.353973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.354147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.354178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.354294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.354325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.354516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.354549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.354737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.354768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.354935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.354994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.355170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.355202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.355336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.355368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.355628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.355661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.355785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.355817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 
00:25:58.527 [2024-11-20 09:10:14.356025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.356059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.356186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.356218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.356335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.356367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.527 qpair failed and we were unable to recover it. 00:25:58.527 [2024-11-20 09:10:14.356501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.527 [2024-11-20 09:10:14.356533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.356714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.356747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.356934] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.356975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.357232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.357264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.357505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.357537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.357650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.357682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.357959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.357993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.358251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.358284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.358470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.358501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.358688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.358720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.358841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.358873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.359062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.359097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.359275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.359306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.359418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.359450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.359587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.359618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.359906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.359966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.360202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.360234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.360426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.360457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.360588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.360620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.360682] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1b99af0 (9): Bad file descriptor 00:25:58.528 [2024-11-20 09:10:14.361028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.361101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.361340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.361410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.361693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.361757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
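Editor's note: the `Failed to flush tqpair=0x1b99af0 (9): Bad file descriptor` record above is a different failure from the surrounding retries. On Linux, errno 9 is `EBADF` (the flush ran against a socket descriptor that had already been torn down), while the repeated errno 111 records are `ECONNREFUSED` from the `connect()` retry loop itself. A minimal sketch for decoding these codes when scanning such logs (the `describe_errno` helper is hypothetical, not part of SPDK; errno numbers shown are Linux values):

```python
import errno
import os

# Decode the numeric errno values seen in the SPDK log records above.
# On Linux, 111 is ECONNREFUSED (connect() reached a port with no
# listener) and 9 is EBADF (operation on a closed file descriptor).
def describe_errno(code: int) -> str:
    name = errno.errorcode.get(code, "UNKNOWN")
    return f"errno {code} ({name}): {os.strerror(code)}"

print(describe_errno(111))
print(describe_errno(9))
```

On a Linux host this prints the `ECONNREFUSED` and `EBADF` names alongside their `strerror` text; the numeric values are platform-specific, so portable code should compare against `errno.ECONNREFUSED` rather than the literal 111.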
00:25:58.528 [2024-11-20 09:10:14.362024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.362063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.362346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.362379] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.362652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.362684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.362898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.362930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.363179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.363212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.363314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.363345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.363522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.363554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.363789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.363820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.363991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.364026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.528 [2024-11-20 09:10:14.364270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.364301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 
00:25:58.528 [2024-11-20 09:10:14.364427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.528 [2024-11-20 09:10:14.364459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.528 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.364740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.364772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.365038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.365073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.365192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.365223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.365406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.365438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 
00:25:58.529 [2024-11-20 09:10:14.365623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.365655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.365785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.365818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.366004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.366039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.366174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.366207] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 00:25:58.529 [2024-11-20 09:10:14.366384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.366417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it. 
00:25:58.529 [2024-11-20 09:10:14.366520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.529 [2024-11-20 09:10:14.366552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.529 qpair failed and we were unable to recover it.
[identical connect() failed (errno = 111) / sock connection error pairs for tqpair=0x7f1c20000b90 (addr=10.0.0.2, port=4420), repeated from 09:10:14.366725 through 09:10:14.382317, elided]
00:25:58.531 [2024-11-20 09:10:14.383448] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:25:58.531 [2024-11-20 09:10:14.383475] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:25:58.531 [2024-11-20 09:10:14.383483] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:25:58.531 [2024-11-20 09:10:14.383491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:25:58.531 [2024-11-20 09:10:14.383499] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:25:58.531 [2024-11-20 09:10:14.385177] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5
00:25:58.531 [2024-11-20 09:10:14.385266] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6
00:25:58.531 [2024-11-20 09:10:14.385351] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4
00:25:58.531 [2024-11-20 09:10:14.385352] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7
[interleaved connect() failed (errno = 111) / sock connection error pairs for tqpair=0x7f1c20000b90 (addr=10.0.0.2, port=4420), 09:10:14.382430 through 09:10:14.386329, elided]
00:25:58.531 [2024-11-20 09:10:14.386511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.386551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.386735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.386769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.387011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.387226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.387430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 
00:25:58.531 [2024-11-20 09:10:14.387588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.387796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.387939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.531 [2024-11-20 09:10:14.387980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.531 qpair failed and we were unable to recover it. 00:25:58.531 [2024-11-20 09:10:14.388103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.532 [2024-11-20 09:10:14.388136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.532 qpair failed and we were unable to recover it. 00:25:58.532 [2024-11-20 09:10:14.388321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.532 [2024-11-20 09:10:14.388355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.532 qpair failed and we were unable to recover it. 
00:25:58.532 [2024-11-20 09:10:14.388619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.388652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.388783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.388816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.389003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.389037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.389158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.389190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.389438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.389470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.389641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.389674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.389782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.389815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.390906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.390937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.391156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.391189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.391430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.391461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.391650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.391682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.391857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.391888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.392105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.392154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.392312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.392364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.392564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.392597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.392727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.392758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.392938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.392985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.393148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.393368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.393519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.393657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.393817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.393988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.394133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.394354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.394550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.394697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.394899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.532 [2024-11-20 09:10:14.394931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.532 qpair failed and we were unable to recover it.
00:25:58.532 [2024-11-20 09:10:14.395123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.395155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.395275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.395307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.395410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.395442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.395549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.395580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.395768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.395800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.396053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.396087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.396274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.396305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.396414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.396446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.396614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.396646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.396832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.396865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.397129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.397162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.397343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.397375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.397512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.397545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.397728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.397760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.397967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.398001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.398183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.398215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.398390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.398421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.398592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.398625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.398807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.398839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.399010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.399044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.399159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.399191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.399362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.399395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.399577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.399610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.399837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.399869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.400159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.400202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.400403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.400438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.400560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.400594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.400765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.400798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.401012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.401048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.401232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.401266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.401446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.401479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.401653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.401686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.401876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.533 [2024-11-20 09:10:14.401910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.533 qpair failed and we were unable to recover it.
00:25:58.533 [2024-11-20 09:10:14.402043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.402077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.402328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.402361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.402548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.402581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.402767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.402800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.402921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.402962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.403081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.403114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.403375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.403409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.403675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.403711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.403900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.403933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.404098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.404144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.404400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.404436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.404573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.404605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.404734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.404766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.404969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.405004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.405268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.405301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.405443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.405477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.405661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.405693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.405868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.405901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.406168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.406203] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.406412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.406445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.406709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.406742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.406912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.406945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.407150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.407183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.407364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.407397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.407578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.407611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.407786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.407818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.407932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.407978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.408103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.408135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.408317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.534 [2024-11-20 09:10:14.408350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.534 qpair failed and we were unable to recover it.
00:25:58.534 [2024-11-20 09:10:14.408488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.408522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.408653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.408686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.408879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.408921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.409046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.409080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.409261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.409294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 
00:25:58.534 [2024-11-20 09:10:14.409415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.409447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.409619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.534 [2024-11-20 09:10:14.409654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.534 qpair failed and we were unable to recover it. 00:25:58.534 [2024-11-20 09:10:14.409788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.409820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.409991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.410027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.410216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.410250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.410386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.410418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.410657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.410689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.410925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.410976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.411112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.411144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.411380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.411412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.411531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.411563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.411814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.411847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.411983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.412195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.412341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.412503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.412782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.412941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.412982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.413163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.413196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.413306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.413337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.413519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.413551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.413738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.413769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.413963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.413998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.414185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.414217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.414489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.414521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.414651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.414683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.414816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.414847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.414980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.415014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.415123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.415154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.415389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.415422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.415604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.415636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.415807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.415839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.416018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.416052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.416317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.416349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.416532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.416564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 
00:25:58.535 [2024-11-20 09:10:14.416748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.416781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.416965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.416999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.417160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.417200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.535 [2024-11-20 09:10:14.417387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.535 [2024-11-20 09:10:14.417419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.535 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.417592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.417625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.417793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.417826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.417938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.417980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.418233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.418265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.418391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.418423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.418643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.418676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.418852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.418885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.419075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.419109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.419229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.419260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.419495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.419527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.419736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.419769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.419978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.420149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.420302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.420452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.420694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.420906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.420939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.421136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.421169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.421279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.421311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.421452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.421484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.421619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.421650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.421822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.421853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.422092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.422126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.422259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.422289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.422534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.422566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.422757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.422788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.423029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.423062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.423169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.423199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.423369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.423401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.423587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.423617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.423804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.423835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.424100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.424133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.424326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.424357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.424541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.424571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.424799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.424829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.424964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.424997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 
00:25:58.536 [2024-11-20 09:10:14.425251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.425285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.536 qpair failed and we were unable to recover it. 00:25:58.536 [2024-11-20 09:10:14.425455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.536 [2024-11-20 09:10:14.425487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.537 qpair failed and we were unable to recover it. 00:25:58.537 [2024-11-20 09:10:14.425613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-11-20 09:10:14.425652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.537 qpair failed and we were unable to recover it. 00:25:58.537 [2024-11-20 09:10:14.425855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-11-20 09:10:14.425886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.537 qpair failed and we were unable to recover it. 00:25:58.537 [2024-11-20 09:10:14.426199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.537 [2024-11-20 09:10:14.426233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.537 qpair failed and we were unable to recover it. 
00:25:58.537 [2024-11-20 09:10:14.426483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.537 [2024-11-20 09:10:14.426514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.537 qpair failed and we were unable to recover it.
00:25:58.540 (previous three-line error repeated for every subsequent connection attempt from [2024-11-20 09:10:14.426771] through [2024-11-20 09:10:14.456769]: same connect() failure with errno = 111 (ECONNREFUSED) and the same qpair failure for tqpair=0x7f1c1c000b90 at 10.0.0.2, port 4420; duplicate entries collapsed.)
00:25:58.540 [2024-11-20 09:10:14.456973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.457005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.457282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.457313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.457524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.457561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.457822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.457853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.458094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.458126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 
00:25:58.540 [2024-11-20 09:10:14.458305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.458336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.458563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.458595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.458856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.458886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.459156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.459189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.459360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.459391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 
00:25:58.540 [2024-11-20 09:10:14.459648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.459680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.459914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.459945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.460158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.460190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.460386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.460420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.460653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.460684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 
00:25:58.540 [2024-11-20 09:10:14.460867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.460899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.461190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.461223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.461509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.461540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.461781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.461813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.462001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.462035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 
00:25:58.540 [2024-11-20 09:10:14.462303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.462334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.462505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.462538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.462773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.462803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.463039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.463072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.463188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.463219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 
00:25:58.540 [2024-11-20 09:10:14.463428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.463459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.463780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.463811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.540 qpair failed and we were unable to recover it. 00:25:58.540 [2024-11-20 09:10:14.464068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.540 [2024-11-20 09:10:14.464101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.464340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.464371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.464569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.464600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.464885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.464916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.465188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.465221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.465507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.465538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.465792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.465824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.466000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.466032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.466237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.466269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.466531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.466561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.466680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.466711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.466897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.466929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.467126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.467158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.467333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.467365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.467623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.467654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.467874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.467912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.468130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.468163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.468427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.468458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.468714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.468745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.468925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.468964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.469249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.469281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.469552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.469583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.469870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.469901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.470112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.470144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.470323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.470355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.470554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.470585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.470824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.470856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.471094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.471127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.471365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.471397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.471643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.471673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.471933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.471986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.472223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.472255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.472546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.472577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.472818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.472850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.473120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.473154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.473438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.473469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.473650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.473681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 00:25:58.541 [2024-11-20 09:10:14.473932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.541 [2024-11-20 09:10:14.473970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.541 qpair failed and we were unable to recover it. 
00:25:58.541 [2024-11-20 09:10:14.474142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.474173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.474435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.474466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.474751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.474783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.475036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.475069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.475305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.475370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 
00:25:58.542 [2024-11-20 09:10:14.475653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.475688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.475890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.475923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.476222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.476256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.476513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.476545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.476792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.476823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 
00:25:58.542 [2024-11-20 09:10:14.477040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.477075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.477281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.477313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.477522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.477554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.477810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.477841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.477978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.478012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 
00:25:58.542 [2024-11-20 09:10:14.478194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.478227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.478413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.478444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.478659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.478692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.478964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.478999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 00:25:58.542 [2024-11-20 09:10:14.479256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.542 [2024-11-20 09:10:14.479288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.542 qpair failed and we were unable to recover it. 
00:25:58.542 [2024-11-20 09:10:14.479536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.479568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.479738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.479770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.480004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.480036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.480296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.480329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.480609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.480642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.480872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.480904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.481161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.481194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.481387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.481420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.481605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.481636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.481890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.481922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.482223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.482255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.482570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.482607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.482851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.482883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.483124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.542 [2024-11-20 09:10:14.483156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.542 qpair failed and we were unable to recover it.
00:25:58.542 [2024-11-20 09:10:14.483421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.483452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.483733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.483764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.483968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.484001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.484237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.484270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.484573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.484604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.484878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.484910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.485120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.485153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.485342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.485372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.485608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.485639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.485880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.485911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.486230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.486263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.486455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.486487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.486614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.486645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.486906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.486937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.487150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.487182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.487386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.487418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.487667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.487698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.487869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.487901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.488184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.488218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.488497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.488528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.488798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.488830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:25:58.543 [2024-11-20 09:10:14.489017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.489055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.489240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.489271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0
00:25:58.543 [2024-11-20 09:10:14.489528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.489561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:25:58.543 [2024-11-20 09:10:14.489770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.489804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.490038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable
00:25:58.543 [2024-11-20 09:10:14.490072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.490258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.490289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.543 [2024-11-20 09:10:14.490530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.490563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.490750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.490782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.490977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.491009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.491263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.491295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.491533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.491565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.491825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.491858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.492037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.492068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.492251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.492284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.543 [2024-11-20 09:10:14.492512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.543 [2024-11-20 09:10:14.492562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.543 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.492839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.492873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.493139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.493174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.493386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.493418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.493687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.493719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.494007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.494042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.494178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.494211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.494453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.494487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.494774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.494807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.494937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.494980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.495237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.495269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.495485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.495517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.495757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.495788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.496051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.496092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.496280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.496313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.496568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.496599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.496810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.496843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.497105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.497139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.497432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.497464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.497769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.497801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.498058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.498093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.498234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.498267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.498403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.498435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.498691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.498724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.498911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.498944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.499213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.499246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.499416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.499448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.499587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.499620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.499880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.499913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.500128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.500163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.500423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.500457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.500642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.500675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.500782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.500813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.501006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.501041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.501285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.501317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.501437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.501469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.501740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.501774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.501973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.502008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.544 [2024-11-20 09:10:14.502254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.544 [2024-11-20 09:10:14.502287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.544 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.502576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.545 [2024-11-20 09:10:14.502608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420
00:25:58.545 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.502796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.545 [2024-11-20 09:10:14.502838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.545 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.503041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.545 [2024-11-20 09:10:14.503076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.545 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.503219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.545 [2024-11-20 09:10:14.503252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.545 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.503457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.545 [2024-11-20 09:10:14.503489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420
00:25:58.545 qpair failed and we were unable to recover it.
00:25:58.545 [2024-11-20 09:10:14.503765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.503797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.504076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.504109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.504381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.504414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.504656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.504687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.504868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.504901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.505168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.505201] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.505337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.505370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.505571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.505602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.505788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.505822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.506081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.506114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.506335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.506368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.506595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.506627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.506760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.506793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.507083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.507117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.507407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.507439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.507577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.507610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.507857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.507889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.508111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.508144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.508333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.508366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.508513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.508545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.508785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.508818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.509072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.509104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.509295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.509328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.509500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.509537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.509719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.509751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.509960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.509995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.510217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.510250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.510368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.510399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.510631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.510664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.510864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.510895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 
00:25:58.545 [2024-11-20 09:10:14.511045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.511079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.511288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.511321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.511494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.545 [2024-11-20 09:10:14.511525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.545 qpair failed and we were unable to recover it. 00:25:58.545 [2024-11-20 09:10:14.511650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.511682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.511872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.511904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.512118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.512151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.512344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.512376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.512581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.512613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.512922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.512964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.513222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.513254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.513393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.513425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.513637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.513669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.513913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.513945] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.514175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.514206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.514345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.514377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.514633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.514664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.514795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.514827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.514969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.515002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.515194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.515227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.515353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.515384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.515658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.515697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.515943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.515985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.516175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.516209] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.516345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.516378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.516620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.516653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.516839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.516871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.517052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.517085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.517270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.517301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.517439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.517471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.517758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.517790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.518093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.518126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.518257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.518288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.518477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.518511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.518720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.518750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.518937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.518980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.519123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.519154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.519326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.519357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.519537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.519568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.519852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.519885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.520102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.520135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 
00:25:58.546 [2024-11-20 09:10:14.520253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.546 [2024-11-20 09:10:14.520285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.546 qpair failed and we were unable to recover it. 00:25:58.546 [2024-11-20 09:10:14.520480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.520511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.520692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.520723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.520990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.521021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.521211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.521243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 [2024-11-20 09:10:14.521374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.521404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.521534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.521566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.521801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.521839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.522103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.522136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.522263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.522295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 [2024-11-20 09:10:14.522533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.522565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.522758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.522790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.522984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.523019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.523161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.523193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.523307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.523339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 [2024-11-20 09:10:14.523584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.523615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.523808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.523840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.524036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.524069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.524204] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.524237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.524428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.524460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 [2024-11-20 09:10:14.524715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.524747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.525027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.525087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.525294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.525333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.525538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.525570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.525762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.525801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 [2024-11-20 09:10:14.525990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.526025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.526161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.526192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.526380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.526412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 [2024-11-20 09:10:14.526551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.526585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:58.547 [2024-11-20 09:10:14.526883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.526918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 
00:25:58.547 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:58.547 [2024-11-20 09:10:14.527184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.527220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.547 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.547 [2024-11-20 09:10:14.527410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.547 [2024-11-20 09:10:14.527444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.547 qpair failed and we were unable to recover it. 00:25:58.812 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.812 [2024-11-20 09:10:14.527686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.527725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.527908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.527940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.528142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.528174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.528362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.528395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.528662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.528693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.528960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.528994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.529254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.529285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.529428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.529460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.529705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.529736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.529974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.530009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.530231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.530262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.530452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.530483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.530691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.530723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.530836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.530867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1b8bba0 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.531166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.531211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.531459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.531492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.531733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.531765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.531967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.532000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.532206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.532238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.532428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.532461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.532668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.532700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.532877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.532908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.533060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.533094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.533278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.533310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.533534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.533567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.533803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.533836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.534024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.534058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 
00:25:58.812 [2024-11-20 09:10:14.534294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.534334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.534598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.534631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.812 [2024-11-20 09:10:14.534830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.812 [2024-11-20 09:10:14.534862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.812 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.535067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.535101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.535307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.535339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.535603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.535636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.535910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.535942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.536158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.536189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.536380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.536413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.536602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.536635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.536826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.536858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.537061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.537095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.537282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.537315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.537551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.537583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.537808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.537839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.538027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.538061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.538251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.538283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.538405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.538438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.538704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.538737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.538974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.539007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.539221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.539254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.539466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.539499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.539783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.539815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.540084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.540117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.540340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.540373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.540499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.540531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.540733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.540765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c28000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.541069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.541131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.541330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.541363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.541636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.541669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.541842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.541875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.542114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.542146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.542330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.542362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.542581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.542612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.542816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.542847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.543032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.543066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.543256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.543289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.543578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.543610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.543891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.543922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 00:25:58.813 [2024-11-20 09:10:14.544117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.813 [2024-11-20 09:10:14.544150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.813 qpair failed and we were unable to recover it. 
00:25:58.813 [2024-11-20 09:10:14.544343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.544384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.544523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.544554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.544758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.544789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.544975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.545008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.545195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.545227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.545410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.545441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.545726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.545757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.545939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.545983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.546171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.546202] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.546442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.546474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.546658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.546689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.546924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.546966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.547146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.547178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.547355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.547387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.547632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.547664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.547965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.547997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.548188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.548219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.548499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.548531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.548811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.548842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.549029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.549064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.549306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.549338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.549521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.549553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.549740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.549772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.550061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.550094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.550277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.550308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.550492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.550524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.550795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.550827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.551121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.551174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.551396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.551430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.551693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.551725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.552008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.552043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.552288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.552321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.552558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.552591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.552878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.552910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.553184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.553218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 
00:25:58.814 [2024-11-20 09:10:14.553454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.553486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.553711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.553743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.814 [2024-11-20 09:10:14.553996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.814 [2024-11-20 09:10:14.554031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.814 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.554241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.554271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.554480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.554511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.554753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.554794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.554919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.554957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.555145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.555177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.555377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.555408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.555583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.555615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.555858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.555890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.556151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.556184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.556470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.556502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.556771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.556802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.557092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.557125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.557415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.557447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.557690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.557722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 Malloc0 00:25:58.815 [2024-11-20 09:10:14.557915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.557956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.558216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.558248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.558537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.558570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.815 [2024-11-20 09:10:14.558819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.558851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:25:58.815 [2024-11-20 09:10:14.559039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.559073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.559340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.815 [2024-11-20 09:10:14.559373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.815 [2024-11-20 09:10:14.559634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.559668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.559959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.559992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.560184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.560217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.560392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.560424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.560630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.560662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.560784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.560817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.560942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.560982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.561245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.561282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.561571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.561604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.561853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.561885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.562140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.562173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.815 [2024-11-20 09:10:14.562384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.562415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.562595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.562627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.562747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.562778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.562910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.562942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 00:25:58.815 [2024-11-20 09:10:14.563240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.815 [2024-11-20 09:10:14.563273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.815 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.563556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.563589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.563852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.563881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.564136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.564170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.564427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.564460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.564746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.564778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.565046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.565079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.565371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.565381] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:58.816 [2024-11-20 09:10:14.565402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.565543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.565575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.565839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.565871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.566077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.566110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.566241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.566273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.566452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.566484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.566655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.566688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.566924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.566963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.567153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.567186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.567450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.567483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.567737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.567768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.568024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.568058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.568279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.568312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.568572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.568604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.568905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.568936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.569195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.569228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.569359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.569391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.569571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.569602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.569860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.569892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.570134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.570167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.570379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.570410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.570693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.570725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.570965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.570998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.571235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.571267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 
00:25:58.816 [2024-11-20 09:10:14.571399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.816 [2024-11-20 09:10:14.571431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c1c000b90 with addr=10.0.0.2, port=4420 00:25:58.816 qpair failed and we were unable to recover it. 00:25:58.816 [2024-11-20 09:10:14.571692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.571729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.572001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.572034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.572298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.572331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.572623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.572656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 [2024-11-20 09:10:14.572871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.572902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.573173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.573205] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.573404] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.573435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.573642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.573673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.573915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.573946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.817 [2024-11-20 09:10:14.574161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.574192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.574451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.574482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.574668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.574699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.817 [2024-11-20 09:10:14.574990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.575024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.817 [2024-11-20 09:10:14.575285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.575319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.575450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.575481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.575721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.575751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.575941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.575985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.576126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.576158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 [2024-11-20 09:10:14.576338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.576369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.576654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.576685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.576959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.576994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.577198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.577229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.577357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.577390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 [2024-11-20 09:10:14.577521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.577554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.577813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.577845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.578110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.578144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.578386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.578418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.578631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.578663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 [2024-11-20 09:10:14.578923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.578961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.579202] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.579234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.579495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.579526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.579764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.579796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.580061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.580094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 
00:25:58.817 [2024-11-20 09:10:14.580354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.580385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.580569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.580600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.580863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.817 [2024-11-20 09:10:14.580895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.817 qpair failed and we were unable to recover it. 00:25:58.817 [2024-11-20 09:10:14.581186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.581218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.581410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.581441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.581558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.581594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.581829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.581861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.582114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.818 [2024-11-20 09:10:14.582148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.582409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.582440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:58.818 [2024-11-20 09:10:14.582550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.582581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.582820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.818 [2024-11-20 09:10:14.582851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.583054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.583088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.818 [2024-11-20 09:10:14.583265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.583296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.583494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.583526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.583790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.583820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.583931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.583976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.584183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.584214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.584393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.584424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.584694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.584725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.585009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.585043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.585233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.585264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.585438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.585469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.585654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.585685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.585980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.586013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.586215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.586247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.586442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.586474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.586731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.586762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.586929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.586967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.587174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.587206] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.587467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.587498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.587792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.587826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.588092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.588128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.588389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.588421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.588617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.588653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.588885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.588921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.589210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.589243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.589430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.589463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.589667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.589698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 
00:25:58.818 [2024-11-20 09:10:14.589893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.589928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.818 qpair failed and we were unable to recover it. 00:25:58.818 [2024-11-20 09:10:14.590222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.818 [2024-11-20 09:10:14.590256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:58.819 [2024-11-20 09:10:14.590455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.590486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.590672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:58.819 [2024-11-20 09:10:14.590703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.590901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.590940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:58.819 [2024-11-20 09:10:14.591214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.591246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:58.819 [2024-11-20 09:10:14.591382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.591415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.591671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.591703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.591941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.591984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.592168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.592199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.592456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.592487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.592777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.592809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.593001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.593034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.593318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.593350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.593595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.593626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.593755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.593787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.593890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.593921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.594216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.594249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.594492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.594524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.594767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.594798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.595054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.595087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.595297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.595329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.595573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.595604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.595872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.595904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.596197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.596230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.596493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.596524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.596787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.596818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 00:25:58.819 [2024-11-20 09:10:14.597068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:58.819 [2024-11-20 09:10:14.597102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420 00:25:58.819 qpair failed and we were unable to recover it. 
00:25:58.819 [2024-11-20 09:10:14.597354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:58.819 [2024-11-20 09:10:14.597386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f1c20000b90 with addr=10.0.0.2, port=4420
00:25:58.819 qpair failed and we were unable to recover it.
00:25:58.819 [2024-11-20 09:10:14.597653] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:58.819 [2024-11-20 09:10:14.606171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.819 [2024-11-20 09:10:14.606320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.819 [2024-11-20 09:10:14.606368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.819 [2024-11-20 09:10:14.606390] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.819 [2024-11-20 09:10:14.606413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.819 [2024-11-20 09:10:14.606466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.819 qpair failed and we were unable to recover it.
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:58.819 09:10:14 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2479376
00:25:58.819 [2024-11-20 09:10:14.616025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.819 [2024-11-20 09:10:14.616107] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.819 [2024-11-20 09:10:14.616134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.819 [2024-11-20 09:10:14.616148] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.819 [2024-11-20 09:10:14.616162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.616194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.626006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.626084] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.626103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.626113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.626121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.626144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.636009] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.636070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.636085] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.636092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.636101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.636117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.646059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.646166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.646179] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.646186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.646193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.646207] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.656078] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.656133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.656148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.656155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.656161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.656176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.666034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.666087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.666102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.666109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.666114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.666129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.676061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.676118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.676132] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.676138] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.676144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.676159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.686121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.686178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.686192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.686199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.686205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.686220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.696161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.696216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.696230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.696237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.696243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.696258] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.706170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.706225] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.706239] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.706246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.706252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.706268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.716198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.716264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.716278] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.716285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.716291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.716305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.726214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.726268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.726286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.726292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.726299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.726314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.736244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.736300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.736313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.736319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.736326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.736340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.820 [2024-11-20 09:10:14.746269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.820 [2024-11-20 09:10:14.746323] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.820 [2024-11-20 09:10:14.746336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.820 [2024-11-20 09:10:14.746343] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.820 [2024-11-20 09:10:14.746348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.820 [2024-11-20 09:10:14.746363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.820 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.756337] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.756394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.756408] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.756415] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.756421] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.756435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.766309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.766369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.766396] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.766403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.766411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.766432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.776379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.776442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.776456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.776462] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.776468] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.776483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.786387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.786450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.786463] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.786470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.786476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.786491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.796405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.796461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.796474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.796480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.796486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.796501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.806454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.806508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.806521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.806528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.806534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.806548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.816396] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.816453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.816466] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.816472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.816478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.816493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.826498] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.826555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.826567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.826575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.826580] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.826595] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:58.821 [2024-11-20 09:10:14.836527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:58.821 [2024-11-20 09:10:14.836582] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:58.821 [2024-11-20 09:10:14.836596] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:58.821 [2024-11-20 09:10:14.836602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:58.821 [2024-11-20 09:10:14.836608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:58.821 [2024-11-20 09:10:14.836623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:58.821 qpair failed and we were unable to recover it.
00:25:59.081 [2024-11-20 09:10:14.846569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.846631] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.846646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.846653] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.846658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.846674] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.856572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.856623] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.856640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.856647] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.856653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.856668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.866665] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.866723] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.866737] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.866744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.866750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.866765] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.876656] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.876713] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.876726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.876732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.876738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.876753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.886696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.886753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.886766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.886773] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.886779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.886794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.896681] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.896737] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.896751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.896761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.896767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.896781] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.906650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.906707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.906721] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.906728] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.906734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.906749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.916759] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.916816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.916829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.916836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.916842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.916857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.926830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.926886] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.926900] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.926906] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.926913] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.926928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.936737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.936791] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.936805] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.936811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.936817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.936832] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.946872] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.946932] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.946946] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.946956] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.946962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.946978] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.956862] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.956919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.956932] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.956939] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.956945] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.956964] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.082 qpair failed and we were unable to recover it.
00:25:59.082 [2024-11-20 09:10:14.966898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.082 [2024-11-20 09:10:14.966961] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.082 [2024-11-20 09:10:14.966974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.082 [2024-11-20 09:10:14.966981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.082 [2024-11-20 09:10:14.966987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.082 [2024-11-20 09:10:14.967001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:14.976912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:14.976982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:14.976996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:14.977003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:14.977008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:14.977024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:14.986953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:14.987044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:14.987057] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:14.987064] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:14.987070] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:14.987085] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:14.996915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:14.997004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:14.997017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:14.997024] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:14.997030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:14.997045] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.007058] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.007122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.007136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.007143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.007149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.007163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.017024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.017081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.017095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.017102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.017108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.017123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.027054] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.027124] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.027137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.027147] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.027153] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.027167] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.037142] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.037199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.037212] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.037218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.037224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.037238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.047055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.047122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.047136] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.047142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.047148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.047162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.057150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.057204] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.057218] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.057224] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.057230] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.057245] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.067167] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.067231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.067245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.067251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.067258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.067278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.077198] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.077292] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.077305] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.077311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.077317] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.077331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.087229] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.087287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.087301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.087307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.087313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.083 [2024-11-20 09:10:15.087328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.083 qpair failed and we were unable to recover it.
00:25:59.083 [2024-11-20 09:10:15.097319] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.083 [2024-11-20 09:10:15.097428] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.083 [2024-11-20 09:10:15.097442] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.083 [2024-11-20 09:10:15.097448] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.083 [2024-11-20 09:10:15.097454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.084 [2024-11-20 09:10:15.097469] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.084 qpair failed and we were unable to recover it.
00:25:59.084 [2024-11-20 09:10:15.107260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.084 [2024-11-20 09:10:15.107313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.084 [2024-11-20 09:10:15.107327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.084 [2024-11-20 09:10:15.107333] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.084 [2024-11-20 09:10:15.107339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.084 [2024-11-20 09:10:15.107353] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.084 qpair failed and we were unable to recover it.
00:25:59.084 [2024-11-20 09:10:15.117281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.084 [2024-11-20 09:10:15.117342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.084 [2024-11-20 09:10:15.117357] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.084 [2024-11-20 09:10:15.117363] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.084 [2024-11-20 09:10:15.117369] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.084 [2024-11-20 09:10:15.117384] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.084 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.127346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.127450] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.127465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.127472] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.127478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.127493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.137366] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.137420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.137433] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.137440] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.137446] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.137461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.147457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.147513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.147527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.147534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.147539] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.147554] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.157433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.157493] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.157511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.157518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.157524] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.157539] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.167444] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.167502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.167516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.167522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.167528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.167544] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.177477] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.177572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.177586] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.177593] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.177599] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.177614] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.187475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.343 [2024-11-20 09:10:15.187530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.343 [2024-11-20 09:10:15.187543] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.343 [2024-11-20 09:10:15.187549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.343 [2024-11-20 09:10:15.187556] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.343 [2024-11-20 09:10:15.187570] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.343 qpair failed and we were unable to recover it.
00:25:59.343 [2024-11-20 09:10:15.197491] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.344 [2024-11-20 09:10:15.197549] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.344 [2024-11-20 09:10:15.197563] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.344 [2024-11-20 09:10:15.197569] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.344 [2024-11-20 09:10:15.197578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.344 [2024-11-20 09:10:15.197594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.344 qpair failed and we were unable to recover it.
00:25:59.344 [2024-11-20 09:10:15.207571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:59.344 [2024-11-20 09:10:15.207628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:59.344 [2024-11-20 09:10:15.207642] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:59.344 [2024-11-20 09:10:15.207648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:59.344 [2024-11-20 09:10:15.207654] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:25:59.344 [2024-11-20 09:10:15.207669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:25:59.344 qpair failed and we were unable to recover it.
00:25:59.344 [2024-11-20 09:10:15.217618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.217697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.217712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.217718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.217725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.217740] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.227629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.227681] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.227694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.227700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.227707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.227721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.237629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.237689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.237702] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.237709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.237715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.237729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.247699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.247753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.247767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.247774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.247780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.247795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.257742] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.257800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.257814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.257821] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.257827] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.257842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.267766] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.267866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.267882] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.267888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.267895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.267909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.277784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.277862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.277876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.277883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.277889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.277903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.287730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.287807] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.287823] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.287830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.287836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.287851] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.297816] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.297866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.297880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.297887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.297893] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.297908] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.307821] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.307908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.307921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.307927] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.307933] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.307951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.317826] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.344 [2024-11-20 09:10:15.317905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.344 [2024-11-20 09:10:15.317918] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.344 [2024-11-20 09:10:15.317924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.344 [2024-11-20 09:10:15.317930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.344 [2024-11-20 09:10:15.317945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.344 qpair failed and we were unable to recover it. 
00:25:59.344 [2024-11-20 09:10:15.327938] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.328023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.328037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.328043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.328052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.328067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.345 [2024-11-20 09:10:15.338018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.338100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.338113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.338120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.338126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.338141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.345 [2024-11-20 09:10:15.347924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.347986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.348000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.348007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.348013] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.348028] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.345 [2024-11-20 09:10:15.358004] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.358105] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.358118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.358124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.358131] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.358145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.345 [2024-11-20 09:10:15.368013] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.368071] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.368084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.368091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.368097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.368112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.345 [2024-11-20 09:10:15.378031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.345 [2024-11-20 09:10:15.378085] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.345 [2024-11-20 09:10:15.378099] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.345 [2024-11-20 09:10:15.378106] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.345 [2024-11-20 09:10:15.378112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.345 [2024-11-20 09:10:15.378127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.345 qpair failed and we were unable to recover it. 
00:25:59.604 [2024-11-20 09:10:15.388051] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.604 [2024-11-20 09:10:15.388134] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.604 [2024-11-20 09:10:15.388148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.604 [2024-11-20 09:10:15.388155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.604 [2024-11-20 09:10:15.388161] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.604 [2024-11-20 09:10:15.388176] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.604 qpair failed and we were unable to recover it. 
00:25:59.604 [2024-11-20 09:10:15.398219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.604 [2024-11-20 09:10:15.398288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.604 [2024-11-20 09:10:15.398301] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.604 [2024-11-20 09:10:15.398307] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.604 [2024-11-20 09:10:15.398313] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.604 [2024-11-20 09:10:15.398328] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.604 qpair failed and we were unable to recover it. 
00:25:59.604 [2024-11-20 09:10:15.408220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.604 [2024-11-20 09:10:15.408275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.604 [2024-11-20 09:10:15.408288] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.604 [2024-11-20 09:10:15.408295] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.604 [2024-11-20 09:10:15.408301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.604 [2024-11-20 09:10:15.408315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.604 qpair failed and we were unable to recover it. 
00:25:59.604 [2024-11-20 09:10:15.418205] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.604 [2024-11-20 09:10:15.418262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.604 [2024-11-20 09:10:15.418279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.604 [2024-11-20 09:10:15.418286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.604 [2024-11-20 09:10:15.418292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.604 [2024-11-20 09:10:15.418307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.428239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.428293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.428306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.428313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.428319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.428334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.438265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.438321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.438334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.438341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.438347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.438362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.448196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.448256] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.448270] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.448277] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.448283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.448298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.458206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.458259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.458273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.458283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.458289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.458304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.468342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.468396] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.468410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.468416] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.468422] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.468437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.478292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.478345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.478359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.478365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.478371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.478386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.488300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.488361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.488373] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.488380] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.488386] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.488401] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.498339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.498419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.498432] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.498439] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.498445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.498463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.508400] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.508457] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.508470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.508476] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.508483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.508498] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.518392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.518452] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.518465] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.518471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.518477] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.518492] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.528456] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.528523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.528536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.528543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.528548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.528563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.538436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.538509] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.538523] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.538530] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.538535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.605 [2024-11-20 09:10:15.538550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.605 qpair failed and we were unable to recover it. 
00:25:59.605 [2024-11-20 09:10:15.548585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.605 [2024-11-20 09:10:15.548654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.605 [2024-11-20 09:10:15.548668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.605 [2024-11-20 09:10:15.548674] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.605 [2024-11-20 09:10:15.548680] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.548695] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.558500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.558557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.558570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.558576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.558582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.558597] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.568525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.568605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.568617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.568624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.568630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.568645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.578618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.578670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.578683] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.578690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.578696] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.578710] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.588658] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.588726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.588739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.588750] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.588756] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.588771] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.598668] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.598726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.598739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.598746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.598752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.598766] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.608699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.608754] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.608767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.608774] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.608780] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.608794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.618728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.618783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.618797] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.618803] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.618809] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.618824] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.628757] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.628810] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.628824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.628830] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.628836] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.628854] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.606 [2024-11-20 09:10:15.638818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.606 [2024-11-20 09:10:15.638880] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.606 [2024-11-20 09:10:15.638894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.606 [2024-11-20 09:10:15.638900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.606 [2024-11-20 09:10:15.638906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.606 [2024-11-20 09:10:15.638921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.606 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.648890] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.648957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.648972] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.648979] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.648985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.649000] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.658885] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.658940] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.658958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.658965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.658971] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.658987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.668875] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.668929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.668942] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.668952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.668959] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.668974] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.678941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.679008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.679021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.679028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.679034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.679049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.688928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.688992] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.689005] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.689011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.689017] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.689032] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.698998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.699050] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.699064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.699070] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.699076] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.699091] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.708989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.709047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.866 [2024-11-20 09:10:15.709061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.866 [2024-11-20 09:10:15.709067] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.866 [2024-11-20 09:10:15.709073] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.866 [2024-11-20 09:10:15.709088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.866 qpair failed and we were unable to recover it. 
00:25:59.866 [2024-11-20 09:10:15.719059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.866 [2024-11-20 09:10:15.719119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.719135] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.719142] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.719147] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.719162] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.729060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.729118] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.729133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.729139] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.729145] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.729160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.739132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.739191] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.739205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.739211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.739217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.739232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.749117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.749170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.749184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.749190] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.749196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.749211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.759066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.759125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.759139] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.759145] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.759156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.759171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.769218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.769279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.769292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.769299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.769305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.769319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.779194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.779266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.779279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.779286] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.779292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.779306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.789214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.789267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.789280] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.789287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.789292] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.789307] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.799255] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.799349] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.799363] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.799369] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.799375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.799390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.809283] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.809341] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.809355] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.809361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.809367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.809382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.819347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.819401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.819414] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.819420] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.819426] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.819441] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.829317] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.829408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.829421] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.829427] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.829433] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.829448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.839402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.867 [2024-11-20 09:10:15.839459] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.867 [2024-11-20 09:10:15.839472] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.867 [2024-11-20 09:10:15.839479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.867 [2024-11-20 09:10:15.839485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.867 [2024-11-20 09:10:15.839500] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.867 qpair failed and we were unable to recover it. 
00:25:59.867 [2024-11-20 09:10:15.849403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.849454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.849470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.849477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.849483] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.849497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:25:59.868 [2024-11-20 09:10:15.859421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.859478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.859491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.859498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.859504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.859519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:25:59.868 [2024-11-20 09:10:15.869460] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.869517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.869530] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.869536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.869542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.869557] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:25:59.868 [2024-11-20 09:10:15.879493] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.879566] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.879580] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.879586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.879592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.879607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:25:59.868 [2024-11-20 09:10:15.889546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.889610] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.889622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.889629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.889638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.889652] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:25:59.868 [2024-11-20 09:10:15.899605] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:59.868 [2024-11-20 09:10:15.899709] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:59.868 [2024-11-20 09:10:15.899723] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:59.868 [2024-11-20 09:10:15.899730] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:59.868 [2024-11-20 09:10:15.899736] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:25:59.868 [2024-11-20 09:10:15.899752] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:25:59.868 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.909609] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.909671] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.909686] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.909693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.909699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.909714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.919624] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.919684] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.919698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.919705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.919711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.919726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.929688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.929742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.929755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.929761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.929767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.929783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.939657] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.939717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.939731] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.939738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.939744] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.939758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.949685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.949742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.949756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.949762] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.949768] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.949783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.959716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.959775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.959789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.959796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.959802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.959817] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.969747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.969806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.969820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.128 [2024-11-20 09:10:15.969827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.128 [2024-11-20 09:10:15.969833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.128 [2024-11-20 09:10:15.969847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.128 qpair failed and we were unable to recover it. 
00:26:00.128 [2024-11-20 09:10:15.979778] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.128 [2024-11-20 09:10:15.979831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.128 [2024-11-20 09:10:15.979848] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:15.979854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:15.979860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:15.979874] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:15.989796] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:15.989851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:15.989864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:15.989870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:15.989876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:15.989891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:15.999869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:15.999972] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:15.999985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:15.999992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:15.999998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.000012] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.009842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.009897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.009910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.009917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.009923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.009937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.019894] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.019951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.019965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.019975] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.019980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.019996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.029904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.029960] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.029973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.029980] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.029986] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.030001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.039936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.040000] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.040014] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.040020] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.040026] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.040041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.049954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.050011] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.050024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.050030] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.050036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.050051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.059989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.060060] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.060073] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.060080] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.060086] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.060105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.070007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.070063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.070076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.070083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.070089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.070104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.080044] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.080103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.080115] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.080122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.080128] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.080143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.090105] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.129 [2024-11-20 09:10:16.090171] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.129 [2024-11-20 09:10:16.090184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.129 [2024-11-20 09:10:16.090191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.129 [2024-11-20 09:10:16.090197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.129 [2024-11-20 09:10:16.090212] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.129 qpair failed and we were unable to recover it. 
00:26:00.129 [2024-11-20 09:10:16.100112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.130 [2024-11-20 09:10:16.100162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.130 [2024-11-20 09:10:16.100175] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.130 [2024-11-20 09:10:16.100182] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.130 [2024-11-20 09:10:16.100188] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.130 [2024-11-20 09:10:16.100203] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.130 qpair failed and we were unable to recover it. 
00:26:00.130 [2024-11-20 09:10:16.110133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.130 [2024-11-20 09:10:16.110249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.130 [2024-11-20 09:10:16.110262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.130 [2024-11-20 09:10:16.110269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.130 [2024-11-20 09:10:16.110275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.130 [2024-11-20 09:10:16.110290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.130 qpair failed and we were unable to recover it. 
00:26:00.130 [2024-11-20 09:10:16.120176] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.130 [2024-11-20 09:10:16.120233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.130 [2024-11-20 09:10:16.120246] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.130 [2024-11-20 09:10:16.120252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.130 [2024-11-20 09:10:16.120258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.130 [2024-11-20 09:10:16.120273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.130 qpair failed and we were unable to recover it. 
00:26:00.130 [2024-11-20 09:10:16.130195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.130 [2024-11-20 09:10:16.130249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.130 [2024-11-20 09:10:16.130262] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.130 [2024-11-20 09:10:16.130268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.130 [2024-11-20 09:10:16.130275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.130 [2024-11-20 09:10:16.130289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.130 qpair failed and we were unable to recover it. 
00:26:00.130 [2024-11-20 09:10:16.140171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.130 [2024-11-20 09:10:16.140228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.130 [2024-11-20 09:10:16.140241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.130 [2024-11-20 09:10:16.140247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.130 [2024-11-20 09:10:16.140253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.130 [2024-11-20 09:10:16.140269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.130 qpair failed and we were unable to recover it. 
00:26:00.130 [2024-11-20 09:10:16.150262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.130 [2024-11-20 09:10:16.150313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.130 [2024-11-20 09:10:16.150326] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.130 [2024-11-20 09:10:16.150335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.130 [2024-11-20 09:10:16.150341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.130 [2024-11-20 09:10:16.150356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.130 qpair failed and we were unable to recover it.
00:26:00.130 [2024-11-20 09:10:16.160282] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.130 [2024-11-20 09:10:16.160358] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.130 [2024-11-20 09:10:16.160372] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.130 [2024-11-20 09:10:16.160379] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.130 [2024-11-20 09:10:16.160385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.130 [2024-11-20 09:10:16.160399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.130 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.170330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.170388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.170403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.170410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.170416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.170434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.180353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.180410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.180425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.180431] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.180437] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.180452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.190360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.190417] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.190431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.190437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.190443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.190461] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.200443] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.200538] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.200551] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.200557] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.200563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.200577] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.210446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.210506] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.210520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.210526] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.210532] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.210547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.220466] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.220554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.220569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.220575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.220582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.220596] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.230470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.230525] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.230538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.230544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.230550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.230565] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.240514] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.240571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.240584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.240590] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.240597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.240611] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.250548] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.250604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.250618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.250624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.250630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.250644] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.260623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.260674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.260688] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.260694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.260700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.260714] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.270607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.270659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.270673] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.390 [2024-11-20 09:10:16.270679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.390 [2024-11-20 09:10:16.270685] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.390 [2024-11-20 09:10:16.270700] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.390 qpair failed and we were unable to recover it.
00:26:00.390 [2024-11-20 09:10:16.280633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.390 [2024-11-20 09:10:16.280689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.390 [2024-11-20 09:10:16.280706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.280712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.280718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.280733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.290711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.290777] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.290790] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.290796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.290802] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.290818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.300697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.300752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.300765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.300771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.300777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.300791] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.310711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.310760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.310775] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.310781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.310787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.310802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.320750] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.320857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.320871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.320877] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.320886] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.320901] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.330794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.330850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.330864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.330870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.330876] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.330891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.340806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.340862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.340876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.340882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.340888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.340903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.350828] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.350881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.350894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.350900] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.350906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.350921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.360838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.360896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.360909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.360915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.360921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.360936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.370888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.370941] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.370958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.370965] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.370970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.370985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.380914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.380967] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.380980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.380986] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.380992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.381007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.390936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.390995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.391009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.391015] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.391021] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.391036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.400999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.401059] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.401072] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.391 [2024-11-20 09:10:16.401079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.391 [2024-11-20 09:10:16.401085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.391 [2024-11-20 09:10:16.401100] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.391 qpair failed and we were unable to recover it.
00:26:00.391 [2024-11-20 09:10:16.411049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.391 [2024-11-20 09:10:16.411144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.391 [2024-11-20 09:10:16.411160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.392 [2024-11-20 09:10:16.411167] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.392 [2024-11-20 09:10:16.411173] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.392 [2024-11-20 09:10:16.411187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.392 qpair failed and we were unable to recover it.
00:26:00.392 [2024-11-20 09:10:16.421057] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.392 [2024-11-20 09:10:16.421114] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.392 [2024-11-20 09:10:16.421127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.392 [2024-11-20 09:10:16.421134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.392 [2024-11-20 09:10:16.421140] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.392 [2024-11-20 09:10:16.421154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.392 qpair failed and we were unable to recover it.
00:26:00.651 [2024-11-20 09:10:16.431097] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.651 [2024-11-20 09:10:16.431157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.651 [2024-11-20 09:10:16.431172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.651 [2024-11-20 09:10:16.431178] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.651 [2024-11-20 09:10:16.431184] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.651 [2024-11-20 09:10:16.431199] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.651 qpair failed and we were unable to recover it.
00:26:00.651 [2024-11-20 09:10:16.441147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.651 [2024-11-20 09:10:16.441216] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.651 [2024-11-20 09:10:16.441230] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.651 [2024-11-20 09:10:16.441237] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.651 [2024-11-20 09:10:16.441243] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.651 [2024-11-20 09:10:16.441259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.651 qpair failed and we were unable to recover it.
00:26:00.651 [2024-11-20 09:10:16.451141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.651 [2024-11-20 09:10:16.451202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.651 [2024-11-20 09:10:16.451215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.651 [2024-11-20 09:10:16.451222] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.651 [2024-11-20 09:10:16.451233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.651 [2024-11-20 09:10:16.451248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.651 qpair failed and we were unable to recover it.
00:26:00.651 [2024-11-20 09:10:16.461164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.651 [2024-11-20 09:10:16.461220] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.651 [2024-11-20 09:10:16.461234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.461241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.461247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.461262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.471190] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.471240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.471253] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.471260] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.471266] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.471281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.481221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.481279] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.481292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.481298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.481304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.481319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.491248] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.491301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.491314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.491320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.491327] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.491341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.501265] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.501315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.501328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.501335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.501340] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.501356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.511291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.511371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.511384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.511391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.511396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.511411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.521324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.521381] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.521394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.521400] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.521406] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.521421] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.531369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.531426] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.531439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.531446] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.531452] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.531467] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.541412] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.541468] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.541485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.541491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.541497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.541513] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.551398] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.551467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.551480] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.551487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.551493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.551508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.561458] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.561515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.561528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.561534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.561540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.561555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.571473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.571529] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.571542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.571548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.571554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.571569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.581487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.581564] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.581578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.581587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.652 [2024-11-20 09:10:16.581593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.652 [2024-11-20 09:10:16.581607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.652 qpair failed and we were unable to recover it.
00:26:00.652 [2024-11-20 09:10:16.591450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.652 [2024-11-20 09:10:16.591507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.652 [2024-11-20 09:10:16.591522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.652 [2024-11-20 09:10:16.591529] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.591534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.591550] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.601549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.601604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.601617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.601624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.601630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.601645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.611509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.611568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.611581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.611588] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.611593] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.611608] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.621607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.621663] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.621676] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.621682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.621688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.621706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.631555] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.631609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.631623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.631629] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.631635] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.631649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.641682] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.641742] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.641755] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.641761] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.641767] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.641782] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.651692] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.651748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.651761] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.651767] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.651773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.651788] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.661709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.661766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.661779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.661786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.661792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.661807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.671674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.671725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.671739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.671746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.671752] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.671767] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.653 [2024-11-20 09:10:16.681827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.653 [2024-11-20 09:10:16.681889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.653 [2024-11-20 09:10:16.681903] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.653 [2024-11-20 09:10:16.681909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.653 [2024-11-20 09:10:16.681915] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.653 [2024-11-20 09:10:16.681930] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.653 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.691823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.691881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.691896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.691903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.691909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.691923] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.701843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.701896] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.701910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.701917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.701923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.701938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.711882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.711934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.711953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.711963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.711969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.711985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.721910] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.721996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.722010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.722017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.722023] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.722037] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.731918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.731980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.731993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.732000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.732006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.732020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.741939] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.741998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.742011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.742018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.742024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.742039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.752001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.913 [2024-11-20 09:10:16.752063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.913 [2024-11-20 09:10:16.752077] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.913 [2024-11-20 09:10:16.752084] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.913 [2024-11-20 09:10:16.752090] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.913 [2024-11-20 09:10:16.752109] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.913 qpair failed and we were unable to recover it.
00:26:00.913 [2024-11-20 09:10:16.762040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:00.914 [2024-11-20 09:10:16.762103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:00.914 [2024-11-20 09:10:16.762118] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:00.914 [2024-11-20 09:10:16.762124] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:00.914 [2024-11-20 09:10:16.762130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:00.914 [2024-11-20 09:10:16.762145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:00.914 qpair failed and we were unable to recover it.
00:26:00.914 [2024-11-20 09:10:16.772000] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.772054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.772068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.772074] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.772080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.772095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.782006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.782061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.782074] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.782081] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.782087] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.782102] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.792081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.792133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.792146] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.792153] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.792159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.792174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.802043] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.802144] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.802158] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.802164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.802170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.802184] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.812066] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.812121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.812133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.812140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.812146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.812160] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.822114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.822166] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.822180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.822186] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.822193] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.822208] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.832120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.832176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.832189] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.832196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.832201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.832216] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.842225] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.842283] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.842298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.842305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.842311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.842325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.852197] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.852253] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.852266] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.852273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.852279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.852293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.862291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.862345] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.862358] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.862365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.862371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.862386] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.872233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.872298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.872312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.872319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.872325] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.872340] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.882369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.914 [2024-11-20 09:10:16.882433] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.914 [2024-11-20 09:10:16.882447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.914 [2024-11-20 09:10:16.882454] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.914 [2024-11-20 09:10:16.882462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.914 [2024-11-20 09:10:16.882478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.914 qpair failed and we were unable to recover it. 
00:26:00.914 [2024-11-20 09:10:16.892295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.892351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.892365] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.892371] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.892377] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.892392] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:00.915 [2024-11-20 09:10:16.902321] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.902371] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.902384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.902391] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.902398] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.902413] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:00.915 [2024-11-20 09:10:16.912351] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.912403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.912416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.912422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.912428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.912443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:00.915 [2024-11-20 09:10:16.922459] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.922515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.922529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.922535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.922541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.922556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:00.915 [2024-11-20 09:10:16.932424] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.932513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.932526] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.932532] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.932538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.932553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:00.915 [2024-11-20 09:10:16.942452] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:00.915 [2024-11-20 09:10:16.942516] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:00.915 [2024-11-20 09:10:16.942528] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:00.915 [2024-11-20 09:10:16.942535] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:00.915 [2024-11-20 09:10:16.942541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:00.915 [2024-11-20 09:10:16.942555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:00.915 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:16.952541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:16.952609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:16.952623] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:16.952630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:16.952636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:16.952651] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:16.962534] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:16.962592] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:16.962606] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:16.962612] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:16.962618] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:16.962633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:16.972546] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:16.972601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:16.972617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:16.972624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:16.972630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:16.972645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:16.982661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:16.982728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:16.982741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:16.982748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:16.982753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:16.982769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:16.992669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:16.992724] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:16.992738] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:16.992744] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:16.992750] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:16.992764] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:17.002622] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:17.002679] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:17.002692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:17.002699] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:17.002705] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:17.002719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:17.012769] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:17.012832] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:17.012846] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:17.012853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:17.012862] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:17.012877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:17.022749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:17.022801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:17.022814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:17.022820] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:17.022826] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:17.022841] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.175 [2024-11-20 09:10:17.032767] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.175 [2024-11-20 09:10:17.032816] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.175 [2024-11-20 09:10:17.032829] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.175 [2024-11-20 09:10:17.032836] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.175 [2024-11-20 09:10:17.032842] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.175 [2024-11-20 09:10:17.032857] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.175 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.042807] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.042864] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.042878] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.042884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.042890] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.042905] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.052830] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.052925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.052939] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.052945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.052961] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.052977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.062848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.062903] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.062916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.062923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.062929] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.062944] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.072851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.072900] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.072913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.072920] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.072926] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.072941] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.082945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.083006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.083020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.083027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.083033] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.083048] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.092949] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.093004] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.093017] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.093023] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.093030] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.093044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.102950] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.103150] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.103165] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.103172] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.103178] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.103193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.113007] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.113063] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.113076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.113083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.113089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.113105] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.123045] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.123103] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.123116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.123122] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.123129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.123143] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.133068] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.133123] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.133137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.133143] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.133149] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.133164] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.143150] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.143202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.143216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.143227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.143233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.143248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.153124] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.153177] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.153191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.153197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.153203] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.153218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.163153] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.176 [2024-11-20 09:10:17.163211] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.176 [2024-11-20 09:10:17.163224] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.176 [2024-11-20 09:10:17.163230] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.176 [2024-11-20 09:10:17.163236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.176 [2024-11-20 09:10:17.163251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.176 qpair failed and we were unable to recover it. 
00:26:01.176 [2024-11-20 09:10:17.173173] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.177 [2024-11-20 09:10:17.173257] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.177 [2024-11-20 09:10:17.173271] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.177 [2024-11-20 09:10:17.173278] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.177 [2024-11-20 09:10:17.173284] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.177 [2024-11-20 09:10:17.173298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.177 qpair failed and we were unable to recover it. 
00:26:01.177 [2024-11-20 09:10:17.183200] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.177 [2024-11-20 09:10:17.183249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.177 [2024-11-20 09:10:17.183263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.177 [2024-11-20 09:10:17.183269] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.177 [2024-11-20 09:10:17.183275] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.177 [2024-11-20 09:10:17.183295] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.177 qpair failed and we were unable to recover it. 
00:26:01.177 [2024-11-20 09:10:17.193224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.177 [2024-11-20 09:10:17.193272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.177 [2024-11-20 09:10:17.193286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.177 [2024-11-20 09:10:17.193292] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.177 [2024-11-20 09:10:17.193298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.177 [2024-11-20 09:10:17.193313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.177 qpair failed and we were unable to recover it. 
00:26:01.177 [2024-11-20 09:10:17.203247] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.177 [2024-11-20 09:10:17.203308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.177 [2024-11-20 09:10:17.203321] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.177 [2024-11-20 09:10:17.203328] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.177 [2024-11-20 09:10:17.203333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.177 [2024-11-20 09:10:17.203348] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.177 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.213302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.213355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.213371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.213378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.213384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.213400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.223328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.223388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.223403] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.223410] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.223416] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.223432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.233342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.233410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.233423] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.233430] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.233436] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.233451] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.243378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.243442] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.243455] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.243461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.243467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.243483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.253336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.253392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.253406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.253413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.253419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.253434] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.263428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.263481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.263494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.263500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.263506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.263521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.273462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.273517] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.273533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.273543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.273549] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.273564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.283508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.283570] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.283584] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.283591] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.283597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.283612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.293529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.293584] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.293598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.293605] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.293611] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.293626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.303562] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.303619] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.303633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.303640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.303646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.303661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.313577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.313633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.313647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.313654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.313660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.313678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.323612] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.323667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.323680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.323686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.323692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.323708] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.333629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.333686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.333698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.333705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.333711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.333726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.343669] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.343728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.343742] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.343749] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.343755] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.343770] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.353688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.353764] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.353778] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.353785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.353791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.353806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.363725] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.435 [2024-11-20 09:10:17.363779] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.435 [2024-11-20 09:10:17.363792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.435 [2024-11-20 09:10:17.363799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.435 [2024-11-20 09:10:17.363805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.435 [2024-11-20 09:10:17.363819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.435 qpair failed and we were unable to recover it. 
00:26:01.435 [2024-11-20 09:10:17.373761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.435 [2024-11-20 09:10:17.373811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.435 [2024-11-20 09:10:17.373824] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.435 [2024-11-20 09:10:17.373831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.435 [2024-11-20 09:10:17.373837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.435 [2024-11-20 09:10:17.373852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.435 qpair failed and we were unable to recover it.
00:26:01.435 [2024-11-20 09:10:17.383785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.435 [2024-11-20 09:10:17.383837] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.435 [2024-11-20 09:10:17.383851] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.435 [2024-11-20 09:10:17.383857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.435 [2024-11-20 09:10:17.383864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.435 [2024-11-20 09:10:17.383878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.435 qpair failed and we were unable to recover it.
00:26:01.435 [2024-11-20 09:10:17.393800] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.435 [2024-11-20 09:10:17.393850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.435 [2024-11-20 09:10:17.393864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.435 [2024-11-20 09:10:17.393870] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.435 [2024-11-20 09:10:17.393877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.435 [2024-11-20 09:10:17.393891] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.435 qpair failed and we were unable to recover it.
00:26:01.435 [2024-11-20 09:10:17.403917] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.435 [2024-11-20 09:10:17.403989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.435 [2024-11-20 09:10:17.404006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.435 [2024-11-20 09:10:17.404013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.435 [2024-11-20 09:10:17.404019] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.404034] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.413902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.413982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.413996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.414003] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.414009] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.414024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.423928] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.423986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.423999] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.424006] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.424012] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.424027] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.433968] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.434025] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.434038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.434044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.434050] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.434065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.443961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.444036] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.444049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.444056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.444065] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.444080] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.453963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.454018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.454032] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.454038] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.454045] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.454060] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.436 [2024-11-20 09:10:17.464050] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.436 [2024-11-20 09:10:17.464108] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.436 [2024-11-20 09:10:17.464121] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.436 [2024-11-20 09:10:17.464128] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.436 [2024-11-20 09:10:17.464134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.436 [2024-11-20 09:10:17.464149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.436 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.474064] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.474138] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.474154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.474161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.474167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.474183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.484096] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.484160] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.484174] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.484181] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.484187] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.484202] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.494120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.494173] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.494186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.494193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.494199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.494214] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.504118] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.504172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.504186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.504192] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.504199] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.504213] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.514140] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.514197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.514210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.514216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.514222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.514237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.524179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.524236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.524249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.524256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.524261] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.524277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.534206] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.534260] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.534276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.534283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.534289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.695 [2024-11-20 09:10:17.534303] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.695 qpair failed and we were unable to recover it.
00:26:01.695 [2024-11-20 09:10:17.544240] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.695 [2024-11-20 09:10:17.544299] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.695 [2024-11-20 09:10:17.544313] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.695 [2024-11-20 09:10:17.544320] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.695 [2024-11-20 09:10:17.544326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.544341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.554262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.554320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.554334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.554341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.554347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.554361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.564291] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.564356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.564369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.564377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.564383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.564397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.574362] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.574416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.574430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.574436] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.574445] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.574460] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.584350] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.584403] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.584416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.584422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.584428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.584443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.594392] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.594444] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.594457] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.594463] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.594470] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.594485] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.604405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.604462] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.604476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.604483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.604489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.604504] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.614434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.614491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.614505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.614511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.614518] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.614532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.624457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.624508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.624522] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.624528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.624534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.624549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.634510] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.634568] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.634581] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.634587] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.634594] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.634609] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.644532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.644601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.644614] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.644621] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.644627] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.644642] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.654550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.654607] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.654621] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.654627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.654634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.654649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.664582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.664635] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.664649] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.664656] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.664662] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.696 [2024-11-20 09:10:17.664676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.696 qpair failed and we were unable to recover it.
00:26:01.696 [2024-11-20 09:10:17.674600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.696 [2024-11-20 09:10:17.674655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.696 [2024-11-20 09:10:17.674669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.696 [2024-11-20 09:10:17.674675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.696 [2024-11-20 09:10:17.674681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.697 [2024-11-20 09:10:17.674696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.697 qpair failed and we were unable to recover it.
00:26:01.697 [2024-11-20 09:10:17.684635] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.697 [2024-11-20 09:10:17.684692] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.697 [2024-11-20 09:10:17.684706] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.697 [2024-11-20 09:10:17.684712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.697 [2024-11-20 09:10:17.684718] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.697 [2024-11-20 09:10:17.684733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.697 qpair failed and we were unable to recover it.
00:26:01.697 [2024-11-20 09:10:17.694676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.697 [2024-11-20 09:10:17.694732] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.697 [2024-11-20 09:10:17.694745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.697 [2024-11-20 09:10:17.694752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.697 [2024-11-20 09:10:17.694758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.697 [2024-11-20 09:10:17.694773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.697 qpair failed and we were unable to recover it.
00:26:01.697 [2024-11-20 09:10:17.704703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.697 [2024-11-20 09:10:17.704766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.697 [2024-11-20 09:10:17.704779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.697 [2024-11-20 09:10:17.704788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.697 [2024-11-20 09:10:17.704794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.697 [2024-11-20 09:10:17.704810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.697 qpair failed and we were unable to recover it.
00:26:01.697 [2024-11-20 09:10:17.714716] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:01.697 [2024-11-20 09:10:17.714771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:01.697 [2024-11-20 09:10:17.714785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:01.697 [2024-11-20 09:10:17.714792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:01.697 [2024-11-20 09:10:17.714798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:01.697 [2024-11-20 09:10:17.714813] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:01.697 qpair failed and we were unable to recover it.
00:26:01.697 [2024-11-20 09:10:17.724783] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.697 [2024-11-20 09:10:17.724854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.697 [2024-11-20 09:10:17.724867] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.697 [2024-11-20 09:10:17.724874] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.697 [2024-11-20 09:10:17.724880] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.697 [2024-11-20 09:10:17.724895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.697 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.734813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.734892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.734907] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.734915] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.734921] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.734936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.744812] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.744869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.744883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.744890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.744896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.744915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.754866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.754922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.754936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.754943] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.754952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.754968] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.764845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.764901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.764915] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.764922] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.764928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.764943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.774892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.774951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.774965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.774972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.774978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.774993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.784914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.784984] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.784997] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.785004] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.785010] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.785026] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.794860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.794916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.794929] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.794936] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.794942] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.794961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.804992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.805070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.805084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.805090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.805096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.805111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.815016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.815070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.815084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.815090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.815096] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.957 [2024-11-20 09:10:17.815111] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.957 qpair failed and we were unable to recover it. 
00:26:01.957 [2024-11-20 09:10:17.825029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.957 [2024-11-20 09:10:17.825088] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.957 [2024-11-20 09:10:17.825102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.957 [2024-11-20 09:10:17.825108] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.957 [2024-11-20 09:10:17.825114] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.825129] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.835056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.835157] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.835170] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.835180] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.835186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.835201] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.845120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.845176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.845190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.845197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.845204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.845218] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.855126] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.855218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.855232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.855238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.855244] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.855259] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.865143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.865197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.865210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.865216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.865222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.865237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.875163] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.875217] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.875231] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.875238] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.875245] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.875262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.885221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.885293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.885307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.885314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.885319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.885334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.895220] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.895278] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.895291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.895298] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.895304] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.895319] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.905250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.905320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.905334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.905340] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.905347] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.905361] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.915261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.915311] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.915325] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.915331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.915337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.915351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.925281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.925342] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.925356] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.925362] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.925368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.925383] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.935339] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.935394] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.935407] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.935413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.935420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.935435] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.945360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.945414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.945427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.945433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.945440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.958 [2024-11-20 09:10:17.945454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.958 qpair failed and we were unable to recover it. 
00:26:01.958 [2024-11-20 09:10:17.955378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.958 [2024-11-20 09:10:17.955455] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.958 [2024-11-20 09:10:17.955469] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.958 [2024-11-20 09:10:17.955475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.958 [2024-11-20 09:10:17.955481] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.959 [2024-11-20 09:10:17.955496] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.959 qpair failed and we were unable to recover it. 
00:26:01.959 [2024-11-20 09:10:17.965417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.959 [2024-11-20 09:10:17.965496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.959 [2024-11-20 09:10:17.965515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.959 [2024-11-20 09:10:17.965522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.959 [2024-11-20 09:10:17.965527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.959 [2024-11-20 09:10:17.965542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.959 qpair failed and we were unable to recover it. 
00:26:01.959 [2024-11-20 09:10:17.975482] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.959 [2024-11-20 09:10:17.975540] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.959 [2024-11-20 09:10:17.975554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.959 [2024-11-20 09:10:17.975561] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.959 [2024-11-20 09:10:17.975566] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.959 [2024-11-20 09:10:17.975581] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.959 qpair failed and we were unable to recover it. 
00:26:01.959 [2024-11-20 09:10:17.985465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:01.959 [2024-11-20 09:10:17.985520] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:01.959 [2024-11-20 09:10:17.985534] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:01.959 [2024-11-20 09:10:17.985541] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:01.959 [2024-11-20 09:10:17.985548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:01.959 [2024-11-20 09:10:17.985562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:01.959 qpair failed and we were unable to recover it. 
00:26:02.219 [2024-11-20 09:10:17.995516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:17.995578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:17.995592] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:17.995599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:17.995606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:17.995621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.005588] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.005696] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.005711] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.005718] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.005727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.005743] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.015613] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.015670] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.015684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.015692] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.015698] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.015712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.025632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.025689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.025703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.025709] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.025715] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.025730] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.035675] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.035731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.035745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.035752] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.035759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.035774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.045678] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.045734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.045749] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.045755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.045761] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.045777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.055676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.055733] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.055747] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.055753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.055759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.055774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.065738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.065802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.065816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.065823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.065830] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.065846] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.075709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.075765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.075779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.075786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.075792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.075806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.085747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.085805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.085818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.085825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.085831] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.085845] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.095707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.219 [2024-11-20 09:10:18.095766] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.219 [2024-11-20 09:10:18.095784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.219 [2024-11-20 09:10:18.095791] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.219 [2024-11-20 09:10:18.095797] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.219 [2024-11-20 09:10:18.095811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.219 qpair failed and we were unable to recover it.
00:26:02.219 [2024-11-20 09:10:18.105799] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.105850] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.105863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.105869] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.105875] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.105890] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.115832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.115882] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.115895] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.115902] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.115908] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.115922] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.125866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.125922] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.125935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.125942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.125953] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.125969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.135966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.136030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.136043] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.136050] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.136059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.136074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.145935] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.146015] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.146028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.146035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.146041] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.146055] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.155957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.156012] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.156026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.156032] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.156038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.156053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.165943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.166056] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.166069] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.166076] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.166082] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.166095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.176024] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.176081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.176094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.176101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.176107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.176121] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.186059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.186110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.186123] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.186129] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.186135] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.186150] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.196085] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.196140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.196153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.196160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.196166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.196181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.206147] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.206203] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.206216] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.206223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.206229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.206244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.216146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.216202] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.216215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.216221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.216228] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.216242] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.226137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.226194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.220 [2024-11-20 09:10:18.226210] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.220 [2024-11-20 09:10:18.226217] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.220 [2024-11-20 09:10:18.226223] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.220 [2024-11-20 09:10:18.226238] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.220 qpair failed and we were unable to recover it.
00:26:02.220 [2024-11-20 09:10:18.236192] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.220 [2024-11-20 09:10:18.236247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.221 [2024-11-20 09:10:18.236261] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.221 [2024-11-20 09:10:18.236268] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.221 [2024-11-20 09:10:18.236274] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.221 [2024-11-20 09:10:18.236288] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.221 qpair failed and we were unable to recover it.
00:26:02.221 [2024-11-20 09:10:18.246234] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.221 [2024-11-20 09:10:18.246309] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.221 [2024-11-20 09:10:18.246323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.221 [2024-11-20 09:10:18.246329] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.221 [2024-11-20 09:10:18.246335] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.221 [2024-11-20 09:10:18.246350] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.221 qpair failed and we were unable to recover it.
00:26:02.221 [2024-11-20 09:10:18.256324] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.221 [2024-11-20 09:10:18.256385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.221 [2024-11-20 09:10:18.256400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.221 [2024-11-20 09:10:18.256406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.221 [2024-11-20 09:10:18.256412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.221 [2024-11-20 09:10:18.256428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.221 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.266284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.266340] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.266354] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.266364] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.266371] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.266385] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.276293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.276344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.276359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.276365] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.276372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.276387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.286309] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.286366] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.286379] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.286385] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.286391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.286405] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.296431] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.296491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.296504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.296511] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.296517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.296532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.306311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.306368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.306381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.306388] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.306394] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.306412] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.316421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.316475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.316487] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.316494] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.316500] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.316514] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.326463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.326533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.326547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.326553] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.326560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.481 [2024-11-20 09:10:18.326574] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.481 qpair failed and we were unable to recover it.
00:26:02.481 [2024-11-20 09:10:18.336499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.481 [2024-11-20 09:10:18.336562] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.481 [2024-11-20 09:10:18.336576] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.481 [2024-11-20 09:10:18.336582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.481 [2024-11-20 09:10:18.336588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.336603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.346523] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.346581] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.346593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.346600] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.346606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.346620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.356473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.356535] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.356548] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.356555] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.356561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.356576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.366545] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.366604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.366618] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.366624] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.366630] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.366645] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.376528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.376579] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.376593] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.376599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.376605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.376619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.386553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.386620] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.386633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.386639] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.386645] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.386659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.396671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.396763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.396780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.396786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.396792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.396807] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.406677] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.406752] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.406765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.406772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.406777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.406792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.416745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.416855] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.416868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.416875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.416881] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.416896] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.426726] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.426780] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.426793] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.426800] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.426806] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.426821] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.436691] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.436745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.436759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.436765] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.436772] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.436790] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.482 [2024-11-20 09:10:18.446773] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.482 [2024-11-20 09:10:18.446831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.482 [2024-11-20 09:10:18.446845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.482 [2024-11-20 09:10:18.446851] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.482 [2024-11-20 09:10:18.446857] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.482 [2024-11-20 09:10:18.446871] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.482 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.456827] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.456883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.456896] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.456903] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.456909] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.456924] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.466813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.466871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.466884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.466891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.466897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.466912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.476849] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.476905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.476917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.476924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.476930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.476945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.486897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.486957] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.486971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.486977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.486983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.486999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.496922] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.496982] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.496996] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.497002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.497008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.497023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.506945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.507010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.507023] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.507029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.507035] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.507050] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.483 [2024-11-20 09:10:18.517001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.483 [2024-11-20 09:10:18.517061] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.483 [2024-11-20 09:10:18.517076] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.483 [2024-11-20 09:10:18.517083] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.483 [2024-11-20 09:10:18.517089] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.483 [2024-11-20 09:10:18.517104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.483 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.527039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.527097] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.527116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.527123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.527129] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.527144] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.537075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.537154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.537168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.537175] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.537181] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.537195] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.547083] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.547140] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.547154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.547161] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.547167] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.547182] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.557145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.557206] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.557220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.557227] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.557233] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.557248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.567171] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.567243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.567256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.567263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.567272] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.567287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.577172] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.577227] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.577240] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.577247] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.577253] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.577268] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.587242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.587294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.587307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.587313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.587319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.587334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.597268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.597328] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.597341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.597347] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.597353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.597368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.607318] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.607416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.607429] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.607435] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.607441] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.607456] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.617274] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.617326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.617339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.617345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.743 [2024-11-20 09:10:18.617351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.743 [2024-11-20 09:10:18.617366] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.743 qpair failed and we were unable to recover it.
00:26:02.743 [2024-11-20 09:10:18.627354] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.743 [2024-11-20 09:10:18.627414] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.743 [2024-11-20 09:10:18.627427] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.743 [2024-11-20 09:10:18.627434] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.744 [2024-11-20 09:10:18.627440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.744 [2024-11-20 09:10:18.627454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.744 qpair failed and we were unable to recover it.
00:26:02.744 [2024-11-20 09:10:18.637368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.744 [2024-11-20 09:10:18.637419] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.744 [2024-11-20 09:10:18.637431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.744 [2024-11-20 09:10:18.637437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.744 [2024-11-20 09:10:18.637443] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.744 [2024-11-20 09:10:18.637458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.744 qpair failed and we were unable to recover it.
00:26:02.744 [2024-11-20 09:10:18.647379] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:02.744 [2024-11-20 09:10:18.647436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:02.744 [2024-11-20 09:10:18.647449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:02.744 [2024-11-20 09:10:18.647455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:02.744 [2024-11-20 09:10:18.647461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:02.744 [2024-11-20 09:10:18.647476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:02.744 qpair failed and we were unable to recover it.
00:26:02.744 [2024-11-20 09:10:18.657401] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.657498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.657515] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.657522] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.657528] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.657543] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.667427] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.667481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.667494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.667500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.667506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.667521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.677457] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.677508] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.677521] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.677527] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.677534] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.677548] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.687486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.687542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.687555] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.687562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.687568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.687583] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.697502] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.697554] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.697567] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.697579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.697585] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.697600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.707595] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.707654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.707667] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.707673] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.707679] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.707694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.717570] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.717622] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.717635] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.717641] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.717647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.717662] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.727607] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.727666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.727679] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.727685] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.727691] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.727706] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.737636] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.737687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.737700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.737706] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.737712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.737727] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.747664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.747753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.747766] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.747772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.744 [2024-11-20 09:10:18.747778] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.744 [2024-11-20 09:10:18.747792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.744 qpair failed and we were unable to recover it. 
00:26:02.744 [2024-11-20 09:10:18.757715] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.744 [2024-11-20 09:10:18.757770] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.744 [2024-11-20 09:10:18.757784] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.744 [2024-11-20 09:10:18.757790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.745 [2024-11-20 09:10:18.757796] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.745 [2024-11-20 09:10:18.757811] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.745 qpair failed and we were unable to recover it. 
00:26:02.745 [2024-11-20 09:10:18.767747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.745 [2024-11-20 09:10:18.767824] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.745 [2024-11-20 09:10:18.767838] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.745 [2024-11-20 09:10:18.767845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.745 [2024-11-20 09:10:18.767851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.745 [2024-11-20 09:10:18.767866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.745 qpair failed and we were unable to recover it. 
00:26:02.745 [2024-11-20 09:10:18.777763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:02.745 [2024-11-20 09:10:18.777822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:02.745 [2024-11-20 09:10:18.777839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:02.745 [2024-11-20 09:10:18.777847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:02.745 [2024-11-20 09:10:18.777853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:02.745 [2024-11-20 09:10:18.777868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:02.745 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.787724] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.787831] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.787847] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.787854] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.787861] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.787878] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.797803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.797889] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.797904] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.797910] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.797916] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.797931] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.807861] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.807921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.807935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.807941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.807951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.807967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.817858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.817917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.817931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.817937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.817943] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.817962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.827892] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.827944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.827962] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.827973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.827979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.827995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.837945] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.838006] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.838020] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.838026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.838032] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.838047] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.847936] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.848002] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.848024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.848031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.848037] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.848057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.857969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.858027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.858041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.858048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.858054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.858069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.868039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.868089] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.004 [2024-11-20 09:10:18.868103] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.004 [2024-11-20 09:10:18.868110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.004 [2024-11-20 09:10:18.868117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.004 [2024-11-20 09:10:18.868135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.004 qpair failed and we were unable to recover it. 
00:26:03.004 [2024-11-20 09:10:18.878027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.004 [2024-11-20 09:10:18.878081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.878094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.878101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.878107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.878122] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-11-20 09:10:18.888055] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.005 [2024-11-20 09:10:18.888113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.888126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.888133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.888139] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.888153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-11-20 09:10:18.898020] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.005 [2024-11-20 09:10:18.898077] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.898090] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.898096] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.898102] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.898117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-11-20 09:10:18.908123] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.005 [2024-11-20 09:10:18.908180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.908193] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.908200] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.908206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.908220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-11-20 09:10:18.918143] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.005 [2024-11-20 09:10:18.918195] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.918208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.918215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.918221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.918235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005 [2024-11-20 09:10:18.928178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.005 [2024-11-20 09:10:18.928231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.005 [2024-11-20 09:10:18.928245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.005 [2024-11-20 09:10:18.928252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.005 [2024-11-20 09:10:18.928258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.005 [2024-11-20 09:10:18.928272] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.005 qpair failed and we were unable to recover it. 
00:26:03.005–00:26:03.268 [the identical CONNECT-failure sequence above repeats 34 more times, one attempt every ~10 ms from 09:10:18.938 through 09:10:19.269 — each cycle: ctrlr.c: 762 "Unknown controller ID 0x1", nvme_fabric.c CONNECT failed rc -5 / sct 1, sc 130, nvme_tcp.c failed to connect tqpair=0x7f1c20000b90, nvme_qpair.c CQ transport error -6 on qpair id 2, ending "qpair failed and we were unable to recover it."]
00:26:03.268 [2024-11-20 09:10:19.279211] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.268 [2024-11-20 09:10:19.279267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.268 [2024-11-20 09:10:19.279282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.268 [2024-11-20 09:10:19.279288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.268 [2024-11-20 09:10:19.279295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.268 [2024-11-20 09:10:19.279309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.268 qpair failed and we were unable to recover it. 
00:26:03.268 [2024-11-20 09:10:19.289239] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.268 [2024-11-20 09:10:19.289293] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.268 [2024-11-20 09:10:19.289306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.268 [2024-11-20 09:10:19.289313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.268 [2024-11-20 09:10:19.289319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.268 [2024-11-20 09:10:19.289333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.268 qpair failed and we were unable to recover it. 
00:26:03.268 [2024-11-20 09:10:19.299244] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.268 [2024-11-20 09:10:19.299300] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.268 [2024-11-20 09:10:19.299315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.268 [2024-11-20 09:10:19.299322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.268 [2024-11-20 09:10:19.299328] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.268 [2024-11-20 09:10:19.299344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.268 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.309311] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.309389] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.309405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.309412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.309417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.309433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.319328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.319385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.319400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.319407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.319413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.319428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.329372] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.329430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.329444] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.329450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.329456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.329471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.339378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.339437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.339450] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.339458] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.339464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.339479] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.349410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.349467] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.349481] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.349487] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.349493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.349507] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.359450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.359534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.359547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.359554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.359560] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.359575] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.369468] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.369524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.369537] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.369544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.369550] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.369564] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.379487] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.379544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.527 [2024-11-20 09:10:19.379558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.527 [2024-11-20 09:10:19.379564] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.527 [2024-11-20 09:10:19.379570] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.527 [2024-11-20 09:10:19.379585] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.527 qpair failed and we were unable to recover it. 
00:26:03.527 [2024-11-20 09:10:19.389571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.527 [2024-11-20 09:10:19.389627] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.389640] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.389649] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.389655] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.389669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.399535] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.399586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.399599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.399606] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.399612] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.399626] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.409571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.409628] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.409641] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.409648] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.409653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.409668] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.419598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.419666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.419680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.419686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.419692] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.419707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.429618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.429667] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.429680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.429686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.429693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.429711] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.439651] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.439731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.439745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.439751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.439757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.439772] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.449703] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.449760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.449773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.449780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.449786] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.449801] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.459633] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.459690] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.459704] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.459710] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.459717] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.528 [2024-11-20 09:10:19.459731] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.528 qpair failed and we were unable to recover it. 
00:26:03.528 [2024-11-20 09:10:19.469758] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.528 [2024-11-20 09:10:19.469811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.528 [2024-11-20 09:10:19.469825] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.528 [2024-11-20 09:10:19.469831] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.528 [2024-11-20 09:10:19.469837] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.469852] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.479798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.479858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.479872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.479878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.479884] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.479900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.489850] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.489958] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.489971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.489978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.489984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.489999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.499824] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.499881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.499894] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.499901] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.499907] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.499921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.509856] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.509927] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.509941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.509950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.509957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.509972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.519882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.519934] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.519956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.519963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.519969] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.519983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.529853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.529910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.529923] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.529930] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.529936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.529956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.539929] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.539989] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.540003] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.540009] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.540015] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.540030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.549902] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.549963] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.549977] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.529 [2024-11-20 09:10:19.549984] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.529 [2024-11-20 09:10:19.549990] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.529 [2024-11-20 09:10:19.550005] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.529 qpair failed and we were unable to recover it. 
00:26:03.529 [2024-11-20 09:10:19.559981] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.529 [2024-11-20 09:10:19.560039] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.529 [2024-11-20 09:10:19.560051] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.530 [2024-11-20 09:10:19.560058] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.530 [2024-11-20 09:10:19.560067] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.530 [2024-11-20 09:10:19.560082] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.530 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.570062] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.570128] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.570144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.570152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.570157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.570174] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.580018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.580080] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.580095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.580102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.580108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.580124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.590091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.590149] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.590163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.590169] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.590175] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.590190] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.600109] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.600164] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.600178] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.600185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.600191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.600206] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.610148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.610213] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.610227] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.610234] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.610239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.610254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.620174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.788 [2024-11-20 09:10:19.620233] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.788 [2024-11-20 09:10:19.620247] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.788 [2024-11-20 09:10:19.620254] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.788 [2024-11-20 09:10:19.620259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.788 [2024-11-20 09:10:19.620274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.788 qpair failed and we were unable to recover it. 
00:26:03.788 [2024-11-20 09:10:19.630193] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.630245] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.630258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.630264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.630270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.630285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.640295] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.640346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.640359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.640366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.640372] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.640388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.650191] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.650247] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.650263] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.650270] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.650276] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.650290] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.660307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.660375] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.660389] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.660395] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.660401] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.660416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.670308] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.670368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.670381] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.670387] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.670393] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.670408] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.680270] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.680322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.680336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.680342] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.680348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.680363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.690353] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.690410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.690422] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.690429] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.690438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.690452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.700336] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.700392] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.700405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.700412] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.700419] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.700433] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.710410] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.710464] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.710477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.710484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.710490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.710505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.720470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.720523] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.720536] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.720543] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.720548] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.720563] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.730516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.730618] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.730632] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.730638] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.730644] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.730659] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.740527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.740585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.740598] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.740604] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.740610] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.740624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.750542] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.789 [2024-11-20 09:10:19.750621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.789 [2024-11-20 09:10:19.750634] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.789 [2024-11-20 09:10:19.750640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.789 [2024-11-20 09:10:19.750646] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.789 [2024-11-20 09:10:19.750661] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.789 qpair failed and we were unable to recover it. 
00:26:03.789 [2024-11-20 09:10:19.760515] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.760571] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.760585] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.760592] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.760597] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.760612] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.770600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.770666] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.770680] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.770686] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.770693] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.770707] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.780620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.780676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.780693] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.780700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.780706] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.780721] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.790674] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.790726] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.790740] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.790747] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.790753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.790768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.800617] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.800673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.800687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.800694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.800700] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.800715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.810731] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.810788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.810802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.810808] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.810814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.810829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:03.790 [2024-11-20 09:10:19.820811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:03.790 [2024-11-20 09:10:19.820867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:03.790 [2024-11-20 09:10:19.820881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:03.790 [2024-11-20 09:10:19.820890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:03.790 [2024-11-20 09:10:19.820896] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:03.790 [2024-11-20 09:10:19.820912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:03.790 qpair failed and we were unable to recover it. 
00:26:04.049 [2024-11-20 09:10:19.830738] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.049 [2024-11-20 09:10:19.830794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.049 [2024-11-20 09:10:19.830811] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.049 [2024-11-20 09:10:19.830819] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.049 [2024-11-20 09:10:19.830825] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.049 [2024-11-20 09:10:19.830842] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.049 qpair failed and we were unable to recover it. 
00:26:04.049 [2024-11-20 09:10:19.840751] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.049 [2024-11-20 09:10:19.840804] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.049 [2024-11-20 09:10:19.840819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.049 [2024-11-20 09:10:19.840826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.049 [2024-11-20 09:10:19.840832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.049 [2024-11-20 09:10:19.840848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.049 qpair failed and we were unable to recover it. 
00:26:04.049 [2024-11-20 09:10:19.850845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.049 [2024-11-20 09:10:19.850931] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.049 [2024-11-20 09:10:19.850945] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.049 [2024-11-20 09:10:19.850957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.049 [2024-11-20 09:10:19.850963] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.049 [2024-11-20 09:10:19.850979] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.049 qpair failed and we were unable to recover it. 
00:26:04.049 [2024-11-20 09:10:19.860805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.860865] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.860879] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.860886] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.860892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.860907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.870817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.870874] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.870887] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.870894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.870900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.870915] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.880879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.880930] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.880944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.880955] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.880962] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.880977] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.890958] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.891017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.891030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.891036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.891042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.891057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.900980] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.901035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.901049] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.901056] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.901062] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.901078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.911001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.911057] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.911070] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.911077] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.911083] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.911097] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.921027] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.921113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.921126] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.921133] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.921138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.921153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.931063] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.931121] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.931134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.931140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.931146] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.931161] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.941138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.941196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.941209] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.941216] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.941222] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.941236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.951121] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.951180] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.951192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.951202] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.951208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.951223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.961144] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.961194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.961208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.961215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.961221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.961236] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.971179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.971237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.971250] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.971257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.971263] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.971277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.981207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.981265] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.981279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.981285] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.981291] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.981305] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:19.991226] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:19.991280] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:19.991292] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:19.991299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.049 [2024-11-20 09:10:19.991305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.049 [2024-11-20 09:10:19.991323] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.049 qpair failed and we were unable to recover it.
00:26:04.049 [2024-11-20 09:10:20.001214] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.049 [2024-11-20 09:10:20.001268] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.049 [2024-11-20 09:10:20.001282] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.049 [2024-11-20 09:10:20.001288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.001295] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.001310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.011361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.011420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.011435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.011441] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.011447] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.011462] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.021415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.021477] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.021490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.021497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.021503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.021518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.031405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.031476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.031490] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.031497] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.031503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.031518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.041402] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.041460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.041474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.041480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.041486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.041501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.051376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.051437] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.051451] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.051457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.051463] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.051478] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.061454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.061514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.061527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.061534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.061540] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.061555] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.071417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.071471] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.071484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.071491] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.071497] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.071512] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.050 [2024-11-20 09:10:20.081506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.050 [2024-11-20 09:10:20.081557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.050 [2024-11-20 09:10:20.081573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.050 [2024-11-20 09:10:20.081580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.050 [2024-11-20 09:10:20.081586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.050 [2024-11-20 09:10:20.081601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.050 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.091574] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.091655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.091672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.091680] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.091687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.091704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.101587] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.101649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.101663] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.101671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.101677] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.101693] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.111586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.111641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.111662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.111669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.111675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.111691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.121614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.121674] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.121687] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.121694] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.121703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.121719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.131644] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.131705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.131718] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.131725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.131731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.131746] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.141664] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.141722] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.141735] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.141742] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.141748] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.141763] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.151685] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.151743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.151756] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.151763] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.151769] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.151784] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.161754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.161805] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.161820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.161826] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.161832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.161847] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.171754] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.171813] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.171826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.171833] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.171839] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.171853] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.181790] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.181862] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.181875] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.181882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.181888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.181902] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.191851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.191908] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.191921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.309 [2024-11-20 09:10:20.191928] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.309 [2024-11-20 09:10:20.191934] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.309 [2024-11-20 09:10:20.191952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.309 qpair failed and we were unable to recover it.
00:26:04.309 [2024-11-20 09:10:20.201846] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:04.309 [2024-11-20 09:10:20.201897] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:04.309 [2024-11-20 09:10:20.201911] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:04.310 [2024-11-20 09:10:20.201917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:04.310 [2024-11-20 09:10:20.201923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90
00:26:04.310 [2024-11-20 09:10:20.201938] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:04.310 qpair failed and we were unable to recover it.
00:26:04.310 [2024-11-20 09:10:20.211880] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.211946] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.211966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.211972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.211978] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.211993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.221881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.221938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.221956] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.221963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.221968] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.221983] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.231924] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.231986] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.232000] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.232007] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.232014] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.232030] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.241963] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.242017] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.242030] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.242037] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.242043] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.242058] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.251967] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.252027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.252041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.252047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.252057] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.252072] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.262039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.262095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.262108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.262115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.262120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.262136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.272041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.272095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.272108] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.272115] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.272121] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.272135] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.282072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.282127] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.282142] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.282149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.282155] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.282170] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.292122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.292178] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.292192] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.292198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.292204] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.292219] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.302133] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.302192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.302205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.302212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.302218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.302233] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.312164] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.312222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.312236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.312242] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.312248] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.312263] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.322237] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.322294] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.310 [2024-11-20 09:10:20.322307] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.310 [2024-11-20 09:10:20.322314] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.310 [2024-11-20 09:10:20.322320] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.310 [2024-11-20 09:10:20.322335] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.310 qpair failed and we were unable to recover it. 
00:26:04.310 [2024-11-20 09:10:20.332224] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.310 [2024-11-20 09:10:20.332281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.311 [2024-11-20 09:10:20.332294] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.311 [2024-11-20 09:10:20.332300] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.311 [2024-11-20 09:10:20.332306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.311 [2024-11-20 09:10:20.332321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.311 qpair failed and we were unable to recover it. 
00:26:04.311 [2024-11-20 09:10:20.342252] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.311 [2024-11-20 09:10:20.342308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.311 [2024-11-20 09:10:20.342324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.311 [2024-11-20 09:10:20.342331] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.311 [2024-11-20 09:10:20.342337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.311 [2024-11-20 09:10:20.342351] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.311 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.352322] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.352390] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.352406] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.352413] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.352420] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.352437] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.362302] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.362356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.362370] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.362377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.362383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.362398] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.372303] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.372391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.372405] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.372411] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.372417] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.372432] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.382369] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.382421] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.382435] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.382444] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.382451] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.382466] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.392380] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.392436] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.392449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.392456] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.392462] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.392477] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.402426] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.402480] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.402494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.402500] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.402506] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.402521] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.412494] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.412551] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.412565] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.412571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.412578] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.412593] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.422470] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.422550] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.422564] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.422571] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.422577] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.422594] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.432496] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.432548] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.432561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.432567] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.432574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.432588] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.442579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.442634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.442647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.442654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.442660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.442675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.452559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.452613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.452627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.452633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.571 [2024-11-20 09:10:20.452640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.571 [2024-11-20 09:10:20.452654] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.571 qpair failed and we were unable to recover it. 
00:26:04.571 [2024-11-20 09:10:20.462585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.571 [2024-11-20 09:10:20.462641] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.571 [2024-11-20 09:10:20.462654] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.571 [2024-11-20 09:10:20.462661] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.462667] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.462681] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.472652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.472716] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.472729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.472736] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.472741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.472757] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.482629] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.482687] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.482700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.482707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.482712] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.482728] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.492673] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.492728] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.492741] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.492748] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.492754] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.492768] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.502709] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.502759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.502772] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.502779] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.502784] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.502799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.512728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.512801] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.512814] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.512823] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.512829] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.512844] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.522765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.522825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.522839] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.522845] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.522851] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.522866] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.532803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.532863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.532876] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.532883] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.532889] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c20000b90 00:26:04.572 [2024-11-20 09:10:20.532903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.542835] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.542933] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.543002] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.543027] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.543049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c1c000b90 00:26:04.572 [2024-11-20 09:10:20.543101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.552818] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.552905] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.552936] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.552971] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.552988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c1c000b90 00:26:04.572 [2024-11-20 09:10:20.553036] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.562904] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.563027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.563086] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.563112] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.563134] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c28000b90 00:26:04.572 [2024-11-20 09:10:20.563186] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:04.572 qpair failed and we were unable to recover it. 
00:26:04.572 [2024-11-20 09:10:20.572906] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.572983] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.573011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.573026] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.573039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f1c28000b90 00:26:04.572 [2024-11-20 09:10:20.573070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:04.572 qpair failed and we were unable to recover it. 00:26:04.572 [2024-11-20 09:10:20.573248] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:04.572 A controller has encountered a failure and is being reset. 
00:26:04.572 [2024-11-20 09:10:20.582976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.572 [2024-11-20 09:10:20.583095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.572 [2024-11-20 09:10:20.583153] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.572 [2024-11-20 09:10:20.583179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.572 [2024-11-20 09:10:20.583201] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b8bba0 00:26:04.572 [2024-11-20 09:10:20.583254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.573 qpair failed and we were unable to recover it. 
00:26:04.573 [2024-11-20 09:10:20.592970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:04.573 [2024-11-20 09:10:20.593045] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:04.573 [2024-11-20 09:10:20.593075] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:04.573 [2024-11-20 09:10:20.593091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:04.573 [2024-11-20 09:10:20.593104] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x1b8bba0 00:26:04.573 [2024-11-20 09:10:20.593136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:26:04.573 qpair failed and we were unable to recover it. 00:26:04.831 Controller properly reset. 00:26:04.831 Initializing NVMe Controllers 00:26:04.831 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.831 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:04.831 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:04.831 Initialization complete. Launching workers. 
00:26:04.831 Starting thread on core 1 00:26:04.831 Starting thread on core 2 00:26:04.831 Starting thread on core 3 00:26:04.831 Starting thread on core 0 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:04.831 00:26:04.831 real 0m11.592s 00:26:04.831 user 0m21.948s 00:26:04.831 sys 0m4.645s 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:04.831 ************************************ 00:26:04.831 END TEST nvmf_target_disconnect_tc2 00:26:04.831 ************************************ 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:04.831 rmmod nvme_tcp 00:26:04.831 rmmod nvme_fabrics 00:26:04.831 rmmod nvme_keyring 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 2480064 ']' 00:26:04.831 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 2480064 00:26:04.832 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 2480064 ']' 00:26:04.832 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 2480064 00:26:04.832 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2480064 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2480064' 00:26:05.090 killing process with pid 2480064 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 2480064 00:26:05.090 09:10:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 2480064 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@264 -- # local dev 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:05.090 09:10:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # return 0 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@284 -- # iptr 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:26:07.630 00:26:07.630 real 0m20.501s 00:26:07.630 user 0m50.406s 00:26:07.630 sys 0m9.648s 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:07.630 ************************************ 00:26:07.630 END TEST nvmf_target_disconnect 00:26:07.630 ************************************ 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # [[ tcp == \t\c\p ]] 00:26:07.630 
09:10:23 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.630 ************************************ 00:26:07.630 START TEST nvmf_digest 00:26:07.630 ************************************ 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:07.630 * Looking for test storage... 00:26:07.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 
-- # read -ra ver2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:07.630 09:10:23 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:07.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.630 --rc genhtml_branch_coverage=1 00:26:07.630 --rc genhtml_function_coverage=1 00:26:07.630 --rc genhtml_legend=1 00:26:07.630 --rc geninfo_all_blocks=1 00:26:07.630 --rc geninfo_unexecuted_blocks=1 00:26:07.630 00:26:07.630 ' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:07.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.630 --rc genhtml_branch_coverage=1 00:26:07.630 --rc genhtml_function_coverage=1 00:26:07.630 --rc genhtml_legend=1 00:26:07.630 --rc geninfo_all_blocks=1 00:26:07.630 --rc geninfo_unexecuted_blocks=1 00:26:07.630 00:26:07.630 ' 00:26:07.630 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:07.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.630 --rc genhtml_branch_coverage=1 00:26:07.630 --rc genhtml_function_coverage=1 00:26:07.630 --rc genhtml_legend=1 00:26:07.630 --rc geninfo_all_blocks=1 00:26:07.630 --rc geninfo_unexecuted_blocks=1 00:26:07.631 00:26:07.631 ' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:07.631 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:07.631 --rc genhtml_branch_coverage=1 00:26:07.631 --rc genhtml_function_coverage=1 00:26:07.631 --rc genhtml_legend=1 00:26:07.631 --rc geninfo_all_blocks=1 00:26:07.631 --rc geninfo_unexecuted_blocks=1 00:26:07.631 00:26:07.631 ' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:07.631 09:10:23 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:07.631 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:26:07.631 
09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:26:07.631 09:10:23 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # 
local -ga x722 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:14.208 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:14.209 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:14.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:14.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:14.209 09:10:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:14.209 Found net devices under 0000:86:00.0: cvl_0_0 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:14.209 Found net devices under 0000:86:00.1: cvl_0_1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # create_target_ns 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.209 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:14.209 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:14.209 09:10:29 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:14.209 10.0.0.1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:14.209 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:14.210 10.0.0.2 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:14.210 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:14.210 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 
00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:14.210 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:14.210 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.420 ms 00:26:14.210 00:26:14.210 --- 10.0.0.1 ping statistics --- 00:26:14.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.210 rtt min/avg/max/mdev = 0.420/0.420/0.420/0.000 ms 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@107 -- # local dev=target0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:14.210 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:14.210 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:26:14.210 00:26:14.210 --- 10.0.0.2 ping statistics --- 00:26:14.210 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:14.210 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 
00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:26:14.210 
09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:14.210 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target0 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:14.211 ************************************ 00:26:14.211 START TEST nvmf_digest_clean 00:26:14.211 ************************************ 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=2484578 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 2484578 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2484578 ']' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.211 [2024-11-20 09:10:29.591667] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:26:14.211 [2024-11-20 09:10:29.591715] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:14.211 [2024-11-20 09:10:29.673238] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.211 [2024-11-20 09:10:29.715643] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:14.211 [2024-11-20 09:10:29.715683] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:14.211 [2024-11-20 09:10:29.715690] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:14.211 [2024-11-20 09:10:29.715696] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:14.211 [2024-11-20 09:10:29.715701] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:14.211 [2024-11-20 09:10:29.716270] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.211 null0 00:26:14.211 [2024-11-20 09:10:29.880723] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.211 [2024-11-20 09:10:29.904936] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2484774 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2484774 /var/tmp/bperf.sock 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2484774 ']' 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:14.211 09:10:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.211 [2024-11-20 09:10:29.956412] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:14.211 [2024-11-20 09:10:29.956455] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2484774 ] 00:26:14.211 [2024-11-20 09:10:30.031760] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.211 [2024-11-20 09:10:30.082145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.211 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:14.211 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:14.212 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.212 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.212 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:14.470 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.470 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:14.729 nvme0n1 00:26:14.729 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:14.729 09:10:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:14.988 Running I/O for 2 seconds... 00:26:16.860 24435.00 IOPS, 95.45 MiB/s [2024-11-20T08:10:32.901Z] 24402.50 IOPS, 95.32 MiB/s 00:26:16.860 Latency(us) 00:26:16.860 [2024-11-20T08:10:32.901Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:16.860 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:16.860 nvme0n1 : 2.00 24419.94 95.39 0.00 0.00 5236.45 2308.01 11910.46 00:26:16.860 [2024-11-20T08:10:32.901Z] =================================================================================================================== 00:26:16.860 [2024-11-20T08:10:32.901Z] Total : 24419.94 95.39 0.00 0.00 5236.45 2308.01 11910.46 00:26:16.860 { 00:26:16.860 "results": [ 00:26:16.860 { 00:26:16.860 "job": "nvme0n1", 00:26:16.860 "core_mask": "0x2", 00:26:16.860 "workload": "randread", 00:26:16.860 "status": "finished", 00:26:16.860 "queue_depth": 128, 00:26:16.860 "io_size": 4096, 00:26:16.860 "runtime": 2.004223, 00:26:16.860 "iops": 24419.937302386013, 00:26:16.860 "mibps": 95.39038008744537, 00:26:16.860 "io_failed": 0, 00:26:16.860 "io_timeout": 0, 00:26:16.860 "avg_latency_us": 5236.445392928242, 00:26:16.860 "min_latency_us": 2308.006956521739, 00:26:16.860 "max_latency_us": 11910.455652173912 00:26:16.860 } 00:26:16.860 ], 00:26:16.860 "core_count": 1 00:26:16.860 } 00:26:16.860 09:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:16.860 09:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:16.860 09:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:16.860 | select(.opcode=="crc32c") 00:26:16.860 | "\(.module_name) \(.executed)"' 00:26:16.860 09:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:16.860 09:10:32 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2484774 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2484774 ']' 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2484774 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484774 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484774' 00:26:17.119 killing process with pid 2484774 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2484774 00:26:17.119 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.119 00:26:17.119 Latency(us) 00:26:17.119 [2024-11-20T08:10:33.160Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.119 [2024-11-20T08:10:33.160Z] =================================================================================================================== 00:26:17.119 [2024-11-20T08:10:33.160Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.119 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2484774 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2485282 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 2485282 /var/tmp/bperf.sock 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2485282 ']' 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:17.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:17.379 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:17.379 [2024-11-20 09:10:33.342622] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:17.379 [2024-11-20 09:10:33.342669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485282 ] 00:26:17.379 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:17.379 Zero copy mechanism will not be used. 
00:26:17.379 [2024-11-20 09:10:33.417026] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:17.638 [2024-11-20 09:10:33.454943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:17.638 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:17.638 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:17.638 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:17.638 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:17.638 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:17.897 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:17.897 09:10:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:18.156 nvme0n1 00:26:18.156 09:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:18.156 09:10:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:18.415 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.415 Zero copy mechanism will not be used. 00:26:18.415 Running I/O for 2 seconds... 
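The trace above shows the digest test's accel-stats check: after each bperf run it calls `accel_get_stats` over the bperf RPC socket and pipes the result through the jq filter visible in the xtrace. A standalone sketch of that filter against an illustrative stats payload (the JSON values here are hypothetical, not taken from this run; requires `jq`):

```shell
# Illustrative accel_get_stats payload; real output comes from
# rpc.py -s /var/tmp/bperf.sock accel_get_stats
stats='{"operations":[{"opcode":"crc32c","module_name":"software","executed":24420}]}'

# Same filter the test uses to read "<module> <executed>" for crc32c
echo "$stats" | jq -rc '.operations[] | select(.opcode=="crc32c") | "\(.module_name) \(.executed)"'
# -> software 24420
```

The test then splits that pair with `read -r acc_module acc_executed` and asserts the expected module (`software` here, since `scan_dsa=false`) actually executed operations.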
00:26:20.291 5884.00 IOPS, 735.50 MiB/s [2024-11-20T08:10:36.332Z] 5826.00 IOPS, 728.25 MiB/s 00:26:20.291 Latency(us) 00:26:20.291 [2024-11-20T08:10:36.332Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.291 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:20.291 nvme0n1 : 2.00 5828.78 728.60 0.00 0.00 2742.30 637.55 5983.72 00:26:20.291 [2024-11-20T08:10:36.332Z] =================================================================================================================== 00:26:20.291 [2024-11-20T08:10:36.332Z] Total : 5828.78 728.60 0.00 0.00 2742.30 637.55 5983.72 00:26:20.291 { 00:26:20.291 "results": [ 00:26:20.291 { 00:26:20.291 "job": "nvme0n1", 00:26:20.291 "core_mask": "0x2", 00:26:20.291 "workload": "randread", 00:26:20.291 "status": "finished", 00:26:20.291 "queue_depth": 16, 00:26:20.291 "io_size": 131072, 00:26:20.291 "runtime": 2.001791, 00:26:20.291 "iops": 5828.780327216978, 00:26:20.291 "mibps": 728.5975409021222, 00:26:20.291 "io_failed": 0, 00:26:20.291 "io_timeout": 0, 00:26:20.291 "avg_latency_us": 2742.3031667436767, 00:26:20.291 "min_latency_us": 637.5513043478261, 00:26:20.291 "max_latency_us": 5983.721739130435 00:26:20.291 } 00:26:20.291 ], 00:26:20.291 "core_count": 1 00:26:20.291 } 00:26:20.291 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:20.291 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:20.291 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:20.291 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:20.291 | select(.opcode=="crc32c") 00:26:20.291 | "\(.module_name) \(.executed)"' 00:26:20.291 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2485282 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2485282 ']' 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2485282 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485282 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485282' 00:26:20.596 killing process with pid 2485282 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2485282 00:26:20.596 Received shutdown signal, test time was about 2.000000 seconds 
00:26:20.596 00:26:20.596 Latency(us) 00:26:20.596 [2024-11-20T08:10:36.637Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:20.596 [2024-11-20T08:10:36.637Z] =================================================================================================================== 00:26:20.596 [2024-11-20T08:10:36.637Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:20.596 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2485282 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2485789 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2485789 /var/tmp/bperf.sock 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2485789 ']' 00:26:20.881 09:10:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:20.881 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:20.881 [2024-11-20 09:10:36.737956] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:20.881 [2024-11-20 09:10:36.738007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2485789 ] 00:26:20.881 [2024-11-20 09:10:36.813225] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:20.881 [2024-11-20 09:10:36.856212] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:20.881 09:10:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:21.139 09:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.139 09:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:21.706 nvme0n1 00:26:21.706 09:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:21.706 09:10:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:21.706 Running I/O for 2 seconds... 
00:26:24.021 26677.00 IOPS, 104.21 MiB/s [2024-11-20T08:10:40.062Z] 26806.50 IOPS, 104.71 MiB/s 00:26:24.021 Latency(us) 00:26:24.021 [2024-11-20T08:10:40.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.021 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:24.021 nvme0n1 : 2.01 26806.74 104.71 0.00 0.00 4766.14 3618.73 12366.36 00:26:24.021 [2024-11-20T08:10:40.062Z] =================================================================================================================== 00:26:24.021 [2024-11-20T08:10:40.062Z] Total : 26806.74 104.71 0.00 0.00 4766.14 3618.73 12366.36 00:26:24.021 { 00:26:24.021 "results": [ 00:26:24.021 { 00:26:24.021 "job": "nvme0n1", 00:26:24.021 "core_mask": "0x2", 00:26:24.021 "workload": "randwrite", 00:26:24.021 "status": "finished", 00:26:24.021 "queue_depth": 128, 00:26:24.021 "io_size": 4096, 00:26:24.021 "runtime": 2.005951, 00:26:24.021 "iops": 26806.73655537947, 00:26:24.021 "mibps": 104.71381466945105, 00:26:24.021 "io_failed": 0, 00:26:24.021 "io_timeout": 0, 00:26:24.021 "avg_latency_us": 4766.136716616308, 00:26:24.021 "min_latency_us": 3618.7269565217393, 00:26:24.021 "max_latency_us": 12366.358260869565 00:26:24.021 } 00:26:24.021 ], 00:26:24.021 "core_count": 1 00:26:24.021 } 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:24.021 | select(.opcode=="crc32c") 00:26:24.021 | "\(.module_name) \(.executed)"' 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2485789 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2485789 ']' 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2485789 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2485789 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2485789' 00:26:24.021 killing process with pid 2485789 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2485789 00:26:24.021 Received shutdown signal, test time was about 2.000000 seconds 
00:26:24.021 00:26:24.021 Latency(us) 00:26:24.021 [2024-11-20T08:10:40.062Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:24.021 [2024-11-20T08:10:40.062Z] =================================================================================================================== 00:26:24.021 [2024-11-20T08:10:40.062Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:24.021 09:10:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2485789 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2486447 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2486447 /var/tmp/bperf.sock 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 2486447 ']' 00:26:24.280 09:10:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:24.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:24.280 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:24.280 [2024-11-20 09:10:40.165303] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:24.280 [2024-11-20 09:10:40.165354] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2486447 ] 00:26:24.280 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:24.280 Zero copy mechanism will not be used. 
00:26:24.280 [2024-11-20 09:10:40.242803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:24.280 [2024-11-20 09:10:40.281538] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:24.539 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:24.539 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:24.539 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:24.539 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:24.539 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:24.798 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:24.798 09:10:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:25.056 nvme0n1 00:26:25.056 09:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:25.056 09:10:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:25.315 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:25.315 Zero copy mechanism will not be used. 00:26:25.315 Running I/O for 2 seconds... 
00:26:27.189 6406.00 IOPS, 800.75 MiB/s [2024-11-20T08:10:43.230Z] 6712.50 IOPS, 839.06 MiB/s 00:26:27.189 Latency(us) 00:26:27.189 [2024-11-20T08:10:43.230Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.189 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:27.189 nvme0n1 : 2.00 6710.98 838.87 0.00 0.00 2380.24 1210.99 4245.59 00:26:27.189 [2024-11-20T08:10:43.230Z] =================================================================================================================== 00:26:27.189 [2024-11-20T08:10:43.230Z] Total : 6710.98 838.87 0.00 0.00 2380.24 1210.99 4245.59 00:26:27.189 { 00:26:27.189 "results": [ 00:26:27.189 { 00:26:27.189 "job": "nvme0n1", 00:26:27.189 "core_mask": "0x2", 00:26:27.189 "workload": "randwrite", 00:26:27.189 "status": "finished", 00:26:27.189 "queue_depth": 16, 00:26:27.189 "io_size": 131072, 00:26:27.189 "runtime": 2.003285, 00:26:27.189 "iops": 6710.977219916287, 00:26:27.189 "mibps": 838.8721524895359, 00:26:27.189 "io_failed": 0, 00:26:27.189 "io_timeout": 0, 00:26:27.189 "avg_latency_us": 2380.243445144432, 00:26:27.189 "min_latency_us": 1210.9913043478261, 00:26:27.189 "max_latency_us": 4245.593043478261 00:26:27.189 } 00:26:27.189 ], 00:26:27.189 "core_count": 1 00:26:27.189 } 00:26:27.189 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:27.189 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:27.189 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:27.189 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:27.189 | select(.opcode=="crc32c") 00:26:27.189 | "\(.module_name) \(.executed)"' 00:26:27.189 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:27.448 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:27.448 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:27.448 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2486447 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2486447 ']' 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2486447 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2486447 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2486447' 00:26:27.449 killing process with pid 2486447 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2486447 00:26:27.449 Received shutdown signal, test time was about 2.000000 seconds 
00:26:27.449 00:26:27.449 Latency(us) 00:26:27.449 [2024-11-20T08:10:43.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:27.449 [2024-11-20T08:10:43.490Z] =================================================================================================================== 00:26:27.449 [2024-11-20T08:10:43.490Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:27.449 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2486447 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2484578 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 2484578 ']' 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 2484578 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2484578 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2484578' 00:26:27.708 killing process with pid 2484578 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 2484578 00:26:27.708 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 2484578 00:26:27.968 00:26:27.968 
real 0m14.221s 00:26:27.968 user 0m27.127s 00:26:27.968 sys 0m4.703s 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 ************************************ 00:26:27.968 END TEST nvmf_digest_clean 00:26:27.968 ************************************ 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 ************************************ 00:26:27.968 START TEST nvmf_digest_error 00:26:27.968 ************************************ 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=2486964 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 2486964 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2486964 ']' 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:27.968 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:27.968 09:10:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:27.968 [2024-11-20 09:10:43.880613] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:27.968 [2024-11-20 09:10:43.880660] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:27.968 [2024-11-20 09:10:43.962793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:27.968 [2024-11-20 09:10:44.001500] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:27.968 [2024-11-20 09:10:44.001537] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:26:27.968 [2024-11-20 09:10:44.001545] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:27.968 [2024-11-20 09:10:44.001551] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:27.968 [2024-11-20 09:10:44.001556] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:27.968 [2024-11-20 09:10:44.002170] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.228 [2024-11-20 09:10:44.086651] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.228 09:10:44 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.228 null0 00:26:28.228 [2024-11-20 09:10:44.177851] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:28.228 [2024-11-20 09:10:44.202070] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2487186 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2487186 /var/tmp/bperf.sock 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2487186 ']' 
00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:28.228 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:28.228 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.228 [2024-11-20 09:10:44.254427] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:28.228 [2024-11-20 09:10:44.254469] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487186 ] 00:26:28.487 [2024-11-20 09:10:44.328353] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:28.487 [2024-11-20 09:10:44.369920] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:28.487 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:28.487 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:28.487 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.487 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:28.746 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:29.005 nvme0n1 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:29.005 09:10:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:29.264 Running I/O for 2 seconds... 00:26:29.264 [2024-11-20 09:10:45.094356] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.094390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:21557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.094400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.105404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.105429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:23384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.105438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.114461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.114483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.114492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.126905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.126927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:23500 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.126936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.136681] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.136701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:20386 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.136710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.146164] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.146185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:5566 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.146193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.155537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.155557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:1860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.264 [2024-11-20 09:10:45.155566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.264 [2024-11-20 09:10:45.164552] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.264 [2024-11-20 09:10:45.164573] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:4893 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.164581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.175714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.175734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:25000 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.175743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.184529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.184550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:14825 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.184558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.195005] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.195025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:1816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.195033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.204263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.204283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:6242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.204291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.212929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.212956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12243 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.212966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.222544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.222567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:7220 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.222575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.231865] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.231886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.231894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.241301] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.241322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:10413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.241334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.252199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.252221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.252229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.261439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.261461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:21480 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.261470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.269814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.269838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.269848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.282617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.282640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:5887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.282648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.292923] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.292944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22020 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.292959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.265 [2024-11-20 09:10:45.301758] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.265 [2024-11-20 09:10:45.301782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16702 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.265 [2024-11-20 09:10:45.301791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.314385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.314409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:18916 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.314417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.324400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.324422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:19018 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.324430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.333244] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.333265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:22100 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.333273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.343269] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.343290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.343298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.352077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.352098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:23189 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 
09:10:45.352106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.361849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.361870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.361878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.372288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.372308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:10080 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.372317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.381904] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.381926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:19045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.381934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.390845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.390865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:20370 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.390873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.402079] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.402100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:25410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.402108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.414738] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.414760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.414772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.422421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.422441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:11610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.422450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.434167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.434188] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.434196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.446816] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.446837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.446845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.459491] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.459513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:16402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.459521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.472066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.472087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:8247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.472095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.483945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.483974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:2491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.483983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.493839] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.493860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:1502 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.493869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.502592] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.502614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6019 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.502622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.512905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.512932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:17894 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.512940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.521294] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.521315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.521323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.533847] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.533869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.533877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.542783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.542803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:14151 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.542810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.525 [2024-11-20 09:10:45.552555] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.525 [2024-11-20 09:10:45.552576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16959 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.525 [2024-11-20 09:10:45.552584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.565500] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.565524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:20298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.565533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.573885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.573907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.573916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.585263] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.585285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2011 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.585293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.597173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.597195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:22542 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.597203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.605260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.605283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:20268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.605291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.615741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.615762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:24239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.615770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.625596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.625617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:23339 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 09:10:45.625625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.633729] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.633750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:25354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.785 [2024-11-20 
09:10:45.633758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.785 [2024-11-20 09:10:45.644391] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.785 [2024-11-20 09:10:45.644412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:3627 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.644420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.652773] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.652794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:21559 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.652802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.663849] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.663870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.663878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.675856] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.675876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:12766 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.675884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.688851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.688873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:24990 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.688885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.697883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.697903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:4198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.697911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.707818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.707839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:24810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.707846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.720537] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.720558] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:20516 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.720566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.728905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.728923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:24394 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.728931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.740858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.740879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13196 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.740887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.754091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.754112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.754120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.765172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.765192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:10340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.765200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.773240] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.773260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:1892 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.773269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.783657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.783677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:12267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.783686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.796752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.796774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:11440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.796782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.805288] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.805307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23063 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.805315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:29.786 [2024-11-20 09:10:45.817034] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:29.786 [2024-11-20 09:10:45.817055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:25235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:29.786 [2024-11-20 09:10:45.817063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.829149] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.829171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:2999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.829180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.839419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.839440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.839449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.849501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.849521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:10153 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.849529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.858129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.858149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:17387 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.858157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.870698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.870720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:9997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.870732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.883616] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.883637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:13737 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.883645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.891886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.891907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.891915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.903922] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.903943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:22257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.903958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.914402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.914423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:17922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.914432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.922926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.922951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 
09:10:45.922960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.934378] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.934397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:1722 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.934405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.943496] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.943516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:2460 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.943524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.952347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.952367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2710 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.952375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.962342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.962366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:14849 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.962374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.972212] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.972232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:3518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.972241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.983123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.983143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:14930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.983151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:45.991674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:45.991693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:8891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:45.991701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:46.002120] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:46.002140] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18408 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.046 [2024-11-20 09:10:46.002148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.046 [2024-11-20 09:10:46.013649] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.046 [2024-11-20 09:10:46.013669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:6053 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.013677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.024303] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.024323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:24039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.024331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.033502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.033521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:23122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.033529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.042822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.042843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:10980 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.042851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.053992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.054012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:12169 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.054020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.062513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.062533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.062542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 [2024-11-20 09:10:46.073535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.073556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2349 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.073564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.047 24567.00 IOPS, 95.96 MiB/s [2024-11-20T08:10:46.088Z] 
[2024-11-20 09:10:46.084678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.047 [2024-11-20 09:10:46.084701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:17586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.047 [2024-11-20 09:10:46.084710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.094231] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.094253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:20340 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.094261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.104117] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.104138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:4404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.104146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.112288] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.112308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:3342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.112316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.122942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.122968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6270 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.122976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.133124] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.133144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:25183 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.133156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.141407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.141427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:18633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.141435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.150944] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.150968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:25509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.150977] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.162043] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.162063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:1255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.162071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.175896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.175916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:18995 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.175925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.184179] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.184198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:1778 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.184206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.194671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.194691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:7318 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:30.306 [2024-11-20 09:10:46.194699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.206959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.206980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22010 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.206988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.218168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.218187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:10698 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.218195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.306 [2024-11-20 09:10:46.227111] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.306 [2024-11-20 09:10:46.227133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20150 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.306 [2024-11-20 09:10:46.227141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.237843] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.237863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:62 nsid:1 lba:16238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.237871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.248784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.248804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:2870 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.248812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.256886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.256906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:25326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.256914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.266887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.266907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:18828 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.266915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.277282] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.277302] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:11700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.277310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.288529] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.288549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:4486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.288557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.297167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.297187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:88 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.297195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.309130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.309149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:6487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.309160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.321568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.321589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:23766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.321597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.307 [2024-11-20 09:10:46.334196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.307 [2024-11-20 09:10:46.334217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:4425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.307 [2024-11-20 09:10:46.334225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.347380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.347401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15282 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.347410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.360057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.360078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:622 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.360087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.371421] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.371442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:54 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.371450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.382429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.382450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:10316 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.382458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.390713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.390733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:16278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.390741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.400781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.400801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:14741 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.400809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.410426] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.410449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:16166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.410457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.420534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.420554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:9165 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.420563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.428599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.428619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:11659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.428627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.439044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.439064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:4611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.439072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.449611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.449631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.566 [2024-11-20 09:10:46.449640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.566 [2024-11-20 09:10:46.457536] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.566 [2024-11-20 09:10:46.457556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.457564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.469129] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.469149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:16397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.469157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.480567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.480587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:16222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 
09:10:46.480595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.489090] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.489110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:4268 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.489118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.501883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.501903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:25199 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.501911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.513535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.513555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18563 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.513563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.522891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.522911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:13241 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.522919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.535146] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.535166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:22420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.535174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:108 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.543418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.543438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:21306 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.543446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.555899] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.555919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.555928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.566452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.566472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.566480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.575011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.575031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:23219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.575039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.587613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.587634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:15321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.587647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.567 [2024-11-20 09:10:46.595968] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.567 [2024-11-20 09:10:46.595989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8105 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.567 [2024-11-20 09:10:46.595997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.826 [2024-11-20 09:10:46.608234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:30.826 [2024-11-20 09:10:46.608257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9723 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.826 [2024-11-20 09:10:46.608265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.826 [2024-11-20 09:10:46.616986] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.826 [2024-11-20 09:10:46.617007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:22281 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.826 [2024-11-20 09:10:46.617015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.826 [2024-11-20 09:10:46.629194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.826 [2024-11-20 09:10:46.629214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21689 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.826 [2024-11-20 09:10:46.629222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.826 [2024-11-20 09:10:46.641878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.826 [2024-11-20 09:10:46.641899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:5595 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.826 [2024-11-20 09:10:46.641907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.651396] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.651418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:12860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.651426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.659472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.659493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.659502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.671089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.671111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.671120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.681887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.681908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.681917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.690655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.690677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:22068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.690685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.701280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.701303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:23824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.701312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.711622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.711644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.711653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.722116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.722137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:2889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.722145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.730882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.730903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:25129 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.730911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.742578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.742600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:13429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.742609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.753434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.753456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.753464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.765195] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.765217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 
09:10:46.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.774193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.774214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:6353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.774222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.785879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.785901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:11383 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.785909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.797009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.797030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:1445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.797038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.805477] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.805497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:11810 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.805505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.816199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.816220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:17745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.816228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.826621] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.826641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13769 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.826649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.835997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.836018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:23687 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.836025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.846814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.846834] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:24988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.846842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:30.827 [2024-11-20 09:10:46.857344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:30.827 [2024-11-20 09:10:46.857370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:3391 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:30.827 [2024-11-20 09:10:46.857378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.866058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.866081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:25354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.866090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.878501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.878522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:20586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.878530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.887934] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.887959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:497 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.887967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.897930] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.897957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:671 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.897966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.906476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.906497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21197 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.906506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.916618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.916639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:23500 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.916647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.926392] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.926412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:21483 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.926420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.935766] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.935788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14824 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.935797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.944130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.944151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:21909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.944160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.955003] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.955024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:8860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.955032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.963862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.963883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:2371 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.963892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.975742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.975764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:10238 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.975772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.984198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.984219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:23504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.984227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:46.996112] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:46.996134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11320 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:46.996142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.009653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.009674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.009682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.020828] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.020848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:15416 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.020857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.029353] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.029373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:18402 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.029384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.040575] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.040597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:17771 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 
09:10:47.040605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.050081] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.050101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:5540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.050109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.059485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.059505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:24156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.059513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.069635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.069655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:16412 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.069663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 [2024-11-20 09:10:47.079038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1ea7d70) 00:26:31.087 [2024-11-20 09:10:47.079059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23156 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:31.087 [2024-11-20 09:10:47.079067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:31.087 24619.00 IOPS, 96.17 MiB/s 00:26:31.087 Latency(us) 00:26:31.087 [2024-11-20T08:10:47.128Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.087 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:31.087 nvme0n1 : 2.04 24162.67 94.39 0.00 0.00 5189.79 2592.95 43538.70 00:26:31.087 [2024-11-20T08:10:47.128Z] =================================================================================================================== 00:26:31.088 [2024-11-20T08:10:47.129Z] Total : 24162.67 94.39 0.00 0.00 5189.79 2592.95 43538.70 00:26:31.347 { 00:26:31.347 "results": [ 00:26:31.347 { 00:26:31.347 "job": "nvme0n1", 00:26:31.347 "core_mask": "0x2", 00:26:31.347 "workload": "randread", 00:26:31.347 "status": "finished", 00:26:31.347 "queue_depth": 128, 00:26:31.347 "io_size": 4096, 00:26:31.347 "runtime": 2.043069, 00:26:31.347 "iops": 24162.669004326333, 00:26:31.347 "mibps": 94.38542579814974, 00:26:31.347 "io_failed": 0, 00:26:31.347 "io_timeout": 0, 00:26:31.347 "avg_latency_us": 5189.786667183363, 00:26:31.347 "min_latency_us": 2592.946086956522, 00:26:31.347 "max_latency_us": 43538.69913043478 00:26:31.347 } 00:26:31.347 ], 00:26:31.347 "core_count": 1 00:26:31.347 } 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:31.347 | .driver_specific 00:26:31.347 | .nvme_error 00:26:31.347 | .status_code 00:26:31.347 | .command_transient_transport_error' 00:26:31.347 
09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 193 > 0 )) 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2487186 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2487186 ']' 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2487186 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:31.347 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487186 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487186' 00:26:31.614 killing process with pid 2487186 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2487186 00:26:31.614 Received shutdown signal, test time was about 2.000000 seconds 00:26:31.614 00:26:31.614 Latency(us) 00:26:31.614 [2024-11-20T08:10:47.655Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:31.614 [2024-11-20T08:10:47.655Z] 
=================================================================================================================== 00:26:31.614 [2024-11-20T08:10:47.655Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2487186 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2487663 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2487663 /var/tmp/bperf.sock 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2487663 ']' 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:26:31.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:31.614 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.614 [2024-11-20 09:10:47.601858] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:31.614 [2024-11-20 09:10:47.601906] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2487663 ] 00:26:31.614 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:31.614 Zero copy mechanism will not be used. 00:26:31.872 [2024-11-20 09:10:47.676540] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.872 [2024-11-20 09:10:47.718593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:31.872 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.872 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:31.872 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:31.873 09:10:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.132 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.392 nvme0n1 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:32.392 09:10:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.653 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:32.653 Zero copy mechanism will not be used. 00:26:32.653 Running I/O for 2 seconds... 
00:26:32.653 [2024-11-20 09:10:48.436764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.436802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.436813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.442080] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.442107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.442117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.447585] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.447608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.447616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.453012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.453036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.453044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.458342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.458364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.458373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.463620] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.463642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.463651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.468964] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.468985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.468995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.474498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.474521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.474529] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.479926] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.479953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.479963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.485622] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.485645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.485654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.491637] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.491660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.491672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.497126] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.497148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:32.653 [2024-11-20 09:10:48.497156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.502731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.502753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.502762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.508105] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.508127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.508135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.513421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.513443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.513452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.518724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.518746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 
nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.518754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.523999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.524020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.524028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.529333] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.529355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.529363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.534725] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.534747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.534755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.653 [2024-11-20 09:10:48.540340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.653 [2024-11-20 09:10:48.540365] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.653 [2024-11-20 09:10:48.540374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.545750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.545771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.545779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.551207] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.551229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.551237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.556795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.556816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.556825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.562249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 
00:26:32.654 [2024-11-20 09:10:48.562271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.562279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.567659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.567680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.567688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.573029] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.573051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.573059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.578371] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.578392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.578400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.583849] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.583870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.583879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.589257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.589279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.589287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.594769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.594791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.594799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.600337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.600358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.600366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.605929] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.605963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.605971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.611428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.611450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.611459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.616956] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.616979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.616986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.622419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.622440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.622448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.627715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.627737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.627744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.633655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.633677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.633691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.640385] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.640406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.640414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.647979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.648002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 
09:10:48.648010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.654770] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.654793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.654802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.661517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.661540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.661548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.667495] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.667517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.667526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.673834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.673855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7008 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.654 [2024-11-20 09:10:48.673863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.654 [2024-11-20 09:10:48.677965] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.654 [2024-11-20 09:10:48.677986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.655 [2024-11-20 09:10:48.677995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.655 [2024-11-20 09:10:48.684586] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.655 [2024-11-20 09:10:48.684609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.655 [2024-11-20 09:10:48.684617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.692390] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.692415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.692424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.698652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.698677] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.698687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.704143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.704166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.704175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.709513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.709535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.709544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.714955] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.714976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.714984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.719902] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 
00:26:32.915 [2024-11-20 09:10:48.719924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.719932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.725012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.725034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.725042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.730123] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.730145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.730153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.735218] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.735239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.735252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.738086] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.738107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.738115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.743577] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.743598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.743606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.749198] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.749219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.749227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.755032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.755054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.755062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.760461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.760481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.760489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.765791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.765812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.765820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.771215] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.771236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.771244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.776741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.776762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.915 [2024-11-20 09:10:48.776770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:32.915 [2024-11-20 09:10:48.782175] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.915 [2024-11-20 09:10:48.782200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.916 [2024-11-20 09:10:48.782208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:32.916 [2024-11-20 09:10:48.787657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.916 [2024-11-20 09:10:48.787678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.916 [2024-11-20 09:10:48.787687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:32.916 [2024-11-20 09:10:48.792957] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.916 [2024-11-20 09:10:48.792978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.916 [2024-11-20 09:10:48.792986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:32.916 [2024-11-20 09:10:48.798297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:32.916 [2024-11-20 09:10:48.798317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.916 [2024-11-20 09:10:48.798325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.803715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.803735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.803743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.809205] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.809227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.809235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.814498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.814520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.814528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.819358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.819378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.819386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.824677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.824698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.824706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.829897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.829917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.829924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.835142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.835163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.835171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.840572] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.840593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.840601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.845971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.845992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.846000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.851072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.851095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.851103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.856262] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.856283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.856291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.861742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.861763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.861771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.867214] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.867235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.867243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.872885] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.872906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.872918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.879226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.879249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.879258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.886685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.886708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.886717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.893012] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.893036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.893044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.916 [2024-11-20 09:10:48.899518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.916 [2024-11-20 09:10:48.899541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.916 [2024-11-20 09:10:48.899549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.906342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.906365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.906373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.913983] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.914005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.914014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.917618] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.917639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.917647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.924271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.924293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.924301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.929732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.929758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.929766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.935011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.935032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.935040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.940293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.940314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.940322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.945824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.945846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.945853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:32.917 [2024-11-20 09:10:48.952677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:32.917 [2024-11-20 09:10:48.952701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:32.917 [2024-11-20 09:10:48.952710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.960548] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.960571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.960580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.967583] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.967606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.967614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.974068] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.974092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.974100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.981170] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.981193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.981202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.988633] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.988657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.988665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:48.996611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:48.996635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:48.996644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.003485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.003509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.003517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.010102] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.010124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.010133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.015406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.015428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.015436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.020696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.020717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.020726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.026011] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.026032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.026040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.031470] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.031491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.031499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.037026] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.037048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.037060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.042422] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.042443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.042451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.047657] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.047679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.047687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.052821] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.052843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.052850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.058141] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.058170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.063435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.177 [2024-11-20 09:10:49.063455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.177 [2024-11-20 09:10:49.063463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.177 [2024-11-20 09:10:49.068677] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.068698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.068706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.074002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.074022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.074030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.079307] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.079328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.079336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.084611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.084633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.084641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.089911] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.089933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.089940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.095132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.095154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.095162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.100372] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.100392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.100400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.105533] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.105554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.105563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.110688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.110709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.110717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.115878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.115899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.115907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.121099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.121120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.121129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.126367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.126388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.126399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.131574] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.131595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.131602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.136798] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.136819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.136827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.142020] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.142040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.142049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.147337] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.147358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.147366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.152607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.152629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.152637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.157879] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.157900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.157908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.163138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.163160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.163168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.168425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.168445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.168453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.173678] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.173702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.173710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.179057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.179077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.179086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.184268] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.184290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.184298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.189487] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.189508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.189516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.194739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.194760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.194767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.199984] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.200006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.200014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.205313] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.178 [2024-11-20 09:10:49.205334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.178 [2024-11-20 09:10:49.205344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.178 [2024-11-20 09:10:49.210643] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.179 [2024-11-20 09:10:49.210664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.179 [2024-11-20 09:10:49.210672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.216004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.216028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.216037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.221257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.221279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.221288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.226539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.226560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.226568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.231713] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.231734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.231743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.236921] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.236942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.236957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.242099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.242121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.242129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.247276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.438 [2024-11-20 09:10:49.247297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.438 [2024-11-20 09:10:49.247305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:26:33.438 [2024-11-20 09:10:49.252466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30)
00:26:33.439 [2024-11-20 09:10:49.252488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:33.439 [2024-11-20 09:10:49.252497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:26:33.439 [2024-11-20 09:10:49.257704]
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.257724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.257732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.262914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.262934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.262946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.268167] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.268188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.268196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.273461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.273482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.273490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.278627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.278648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.278657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.283829] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.283851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.283859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.289063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.289084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.289092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.294285] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.294305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.294313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.299502] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.299522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.299531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.304822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.304843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.304851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.310086] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.310110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.310118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.315334] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.315355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.315363] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.320579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.320600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.320608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.325783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.325804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.325812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.331035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.331056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.331064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.336308] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.336328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.336337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.341535] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.341556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.341564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.347431] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.347452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.347461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.352748] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.352768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.352783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.357969] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.357990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:7 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.357998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.363199] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.363220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.363228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.368466] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.368487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.368495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.373755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.373776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.373784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.378994] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.379014] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.379023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.384224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.384246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.439 [2024-11-20 09:10:49.384254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.439 [2024-11-20 09:10:49.389340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.439 [2024-11-20 09:10:49.389361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.389368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.394532] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.394553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.394560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.399752] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.399777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.399785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.404966] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.404987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.404995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.410236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.410256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.410264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.415490] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.415511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.415519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.420717] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.420738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.420746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.425936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.425964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.425972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.431178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.431199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.431207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.437825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.437846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.437853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:33.440 5602.00 IOPS, 700.25 MiB/s [2024-11-20T08:10:49.481Z] [2024-11-20 09:10:49.443173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.443195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.443203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.448352] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.448373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.448381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.453595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.453616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.453624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.458936] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.458965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.458975] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.464278] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.464301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.464310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.469499] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.469520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.469529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.440 [2024-11-20 09:10:49.474739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.440 [2024-11-20 09:10:49.474761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.440 [2024-11-20 09:10:49.474770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.480018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.480041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.480049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.485319] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.485342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.485350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.490563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.490584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.490596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.495796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.495818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.495826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.500988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.501010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:0 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.501019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.506236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.506257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.506266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.511456] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.511477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.511485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.516692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.516714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.516722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.521912] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.521933] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.521942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.527062] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.527085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.527095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.532323] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.532346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.532356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.537578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.537605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.537613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.542834] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.542855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.542863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.548077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.548099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.548107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.553265] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.553290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.553298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.558478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.700 [2024-11-20 09:10:49.558500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.700 [2024-11-20 09:10:49.558508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.700 [2024-11-20 09:10:49.563768] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.563789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.563797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.568989] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.569010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.569018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.574150] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.574172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.574180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.579342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.579364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.579374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.584646] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.584668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.584677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.589835] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.589857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.589866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.595072] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.595094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.595102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.600322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.600344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.600352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.605476] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.605498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.605505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.610695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.610718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.610726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.615991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.616013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.616022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.621670] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.621694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 
09:10:49.621702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.627590] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.627617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.627625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.633882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.633905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.633913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.639914] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.639936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.639944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.645202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.645224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19424 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.645233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.650419] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.650441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.650449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.655652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.655674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.655682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.660900] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.660923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.660931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.666165] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.666186] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.666194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.671406] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.671427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.671435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.676615] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.676637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.676645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.681755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.681776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.681783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.687042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.687063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.687071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.692328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.701 [2024-11-20 09:10:49.692350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.701 [2024-11-20 09:10:49.692358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.701 [2024-11-20 09:10:49.697581] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.697603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.697610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.702868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.702890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.702899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.707779] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.707801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.707809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.712877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.712899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.712908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.718060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.718083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.718098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.723131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.723154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.723162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.728203] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.728226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.728234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.702 [2024-11-20 09:10:49.733429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.702 [2024-11-20 09:10:49.733452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.702 [2024-11-20 09:10:49.733461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.738781] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.738806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.738815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.744027] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.744049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.744058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.749222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.749245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.749253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.754408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.754430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.754439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.759689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.759711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.759720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.764925] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.764957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 
09:10:49.764965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.770190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.770212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.770220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.775461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.775483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.775491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.780720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.780741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.780749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.786061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.786084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:160 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.786092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.791280] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.791301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.791309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.796503] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.796525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.796533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.801799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.801820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.801828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.807047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.807069] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:13 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.807077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.812342] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.812363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.812371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.817582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.817603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.817611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.822818] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.822839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.822847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.828063] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 
09:10:49.828084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.828092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.833264] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.833286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.833294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.838517] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.838538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.838546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.843767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.843789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.843797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.849060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.849081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.849089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.854291] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.854312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.854323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.859519] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.859541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.859549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.864728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.864749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.864757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.870012] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.870033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:12032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.870042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.875325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.875346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.875354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.880549] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.880571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.880580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.885845] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.885867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:18176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.885875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.891099] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.891121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.891129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.896271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.896294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.896301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.901435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.901456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.901464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.906628] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.906648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.906656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.911909] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.911931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.911939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.917182] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.917203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.917211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.922461] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.922483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.962 [2024-11-20 09:10:49.922491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.962 [2024-11-20 09:10:49.927727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.962 [2024-11-20 09:10:49.927749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 
09:10:49.927757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.932974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.932995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.933003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.938192] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.938213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.938221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.943421] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.943442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.943454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.948627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.948648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.948656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.953886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.953907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.953914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.959143] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.959164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.959172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.964336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.964357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.964365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.967260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.967282] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.967290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.972469] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.972489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.972498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.977624] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.977647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.977654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.982808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.982828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.982837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.988051] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 
00:26:33.963 [2024-11-20 09:10:49.988075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.988084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.993277] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.993297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.993306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:33.963 [2024-11-20 09:10:49.998565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:33.963 [2024-11-20 09:10:49.998587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.963 [2024-11-20 09:10:49.998597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.003883] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.003906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.003918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.009160] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.009183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.009193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.014358] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.014380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.014389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.019761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.019783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.019792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.025059] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.025080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.025089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.030299] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.030320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.030329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.035611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.035633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.035642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.040940] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.040969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.040978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.046924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.046952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.046962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.052246] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.052267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.052276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.057554] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.057575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.057583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.062872] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.062892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.062901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.068202] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.068223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.068231] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.073506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.073527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.073536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.078714] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.078734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.078747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.084042] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.084063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.084072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.089513] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.089534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.089553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.224 [2024-11-20 09:10:50.094878] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.224 [2024-11-20 09:10:50.094899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.224 [2024-11-20 09:10:50.094907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.100137] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.100157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.100166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.105368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.105389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.105399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.110647] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.110668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.110676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.115869] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.115889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.115897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.121060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.121081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.121089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.126349] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.126373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.126382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.131569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.131590] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.131598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.136836] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.136858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.136866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.142132] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.142152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.142160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.147388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.147409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.147417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.152578] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 
00:26:34.225 [2024-11-20 09:10:50.152599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.152607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.158159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.158182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.158190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.163565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.163586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.163594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.169018] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.169039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.169048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.174433] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.174453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.174461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.180094] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.180114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.180122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.185668] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.185689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.185698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.191044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.191065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.191073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.196760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.196782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.196790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.202400] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.202421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.202429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.207971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.207992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.208002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.213564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.213585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.213593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.219210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.219232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.219243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.224726] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.224746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.224755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.230600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.230621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.230629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.237335] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.237356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.225 [2024-11-20 09:10:50.237365] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.225 [2024-11-20 09:10:50.245525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.225 [2024-11-20 09:10:50.245545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.226 [2024-11-20 09:10:50.245554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.226 [2024-11-20 09:10:50.251686] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.226 [2024-11-20 09:10:50.251708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.226 [2024-11-20 09:10:50.251716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.226 [2024-11-20 09:10:50.259363] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.226 [2024-11-20 09:10:50.259386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.226 [2024-11-20 09:10:50.259395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.266942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.266970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.266979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.274077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.274099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.274108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.281498] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.281520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.281528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.289343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.289365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.289385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.297472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.297493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:6 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.297501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.304822] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.304844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.304853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.313039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.313062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.313070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.319667] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.319689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.319698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.326740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.326763] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.326771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.333703] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.333724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.486 [2024-11-20 09:10:50.333733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.486 [2024-11-20 09:10:50.339271] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.486 [2024-11-20 09:10:50.339293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.339306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.344727] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.344749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.344757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.350310] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.350334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.350342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.356221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.356244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.356252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.361339] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.361361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.361369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.366384] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.366406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.366415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.371569] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.371590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.371598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.377069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.377090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.377109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.382634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.382656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.382664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.388163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.388205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.388213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 
p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.393691] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.393712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.393720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.399216] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.399238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.399246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.404689] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.404711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.404719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.410191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.410213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.410221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.415745] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.415766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.415775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.421103] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.421125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.421133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.426568] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.426590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.426598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.431942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.431968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.431976] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:34.487 [2024-11-20 09:10:50.437617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1382a30) 00:26:34.487 [2024-11-20 09:10:50.437638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.487 [2024-11-20 09:10:50.437646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:34.487 5644.00 IOPS, 705.50 MiB/s 00:26:34.487 Latency(us) 00:26:34.487 [2024-11-20T08:10:50.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.487 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:34.487 nvme0n1 : 2.00 5642.19 705.27 0.00 0.00 2832.88 644.67 8377.21 00:26:34.487 [2024-11-20T08:10:50.528Z] =================================================================================================================== 00:26:34.487 [2024-11-20T08:10:50.528Z] Total : 5642.19 705.27 0.00 0.00 2832.88 644.67 8377.21 00:26:34.487 { 00:26:34.487 "results": [ 00:26:34.487 { 00:26:34.487 "job": "nvme0n1", 00:26:34.487 "core_mask": "0x2", 00:26:34.487 "workload": "randread", 00:26:34.487 "status": "finished", 00:26:34.487 "queue_depth": 16, 00:26:34.487 "io_size": 131072, 00:26:34.487 "runtime": 2.003478, 00:26:34.487 "iops": 5642.188234659927, 00:26:34.487 "mibps": 705.2735293324909, 00:26:34.487 "io_failed": 0, 00:26:34.487 "io_timeout": 0, 00:26:34.487 "avg_latency_us": 2832.8820259084896, 00:26:34.487 "min_latency_us": 644.6747826086956, 00:26:34.487 "max_latency_us": 8377.210434782608 00:26:34.487 } 00:26:34.487 ], 00:26:34.487 "core_count": 1 00:26:34.487 } 00:26:34.487 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # 
get_transient_errcount nvme0n1 00:26:34.487 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:34.487 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:34.487 | .driver_specific 00:26:34.487 | .nvme_error 00:26:34.487 | .status_code 00:26:34.487 | .command_transient_transport_error' 00:26:34.487 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 365 > 0 )) 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2487663 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2487663 ']' 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2487663 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2487663 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2487663' 00:26:34.747 killing process with pid 2487663 00:26:34.747 09:10:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2487663 00:26:34.747 Received shutdown signal, test time was about 2.000000 seconds 00:26:34.747 00:26:34.747 Latency(us) 00:26:34.747 [2024-11-20T08:10:50.788Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.747 [2024-11-20T08:10:50.788Z] =================================================================================================================== 00:26:34.747 [2024-11-20T08:10:50.788Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:34.747 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2487663 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2488134 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2488134 /var/tmp/bperf.sock 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2488134 ']' 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.006 09:10:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:26:35.006 [2024-11-20 09:10:50.934043] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:35.006 [2024-11-20 09:10:50.934091] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488134 ] 00:26:35.006 [2024-11-20 09:10:51.012027] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.264 [2024-11-20 09:10:51.050399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:35.264 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:35.264 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:35.264 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.264 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:35.521 
09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:35.521 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.521 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.521 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.521 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.521 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:35.781 nvme0n1 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:35.781 09:10:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.040 Running I/O for 2 seconds... 
00:26:36.041 [2024-11-20 09:10:51.894606] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e12d8 00:26:36.041 [2024-11-20 09:10:51.895528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11396 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.895560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.904205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fda78 00:26:36.041 [2024-11-20 09:10:51.904899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:6407 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.904922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.912633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ee190 00:26:36.041 [2024-11-20 09:10:51.913451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:10580 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.913471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.921744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fd640 00:26:36.041 [2024-11-20 09:10:51.922562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:20043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.922582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:12 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.931336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e01f8 00:26:36.041 [2024-11-20 09:10:51.931898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:21434 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.931919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.940063] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fb480 00:26:36.041 [2024-11-20 09:10:51.940961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:4794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.940980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.951625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e1b48 00:26:36.041 [2024-11-20 09:10:51.953026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:15706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.953046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.958375] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eaef0 00:26:36.041 [2024-11-20 09:10:51.959037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:242 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.959056] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.969754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ff3c8 00:26:36.041 [2024-11-20 09:10:51.971052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:12170 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.971071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.978273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ee5c8 00:26:36.041 [2024-11-20 09:10:51.979544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:2976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.979564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.986225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eee38 00:26:36.041 [2024-11-20 09:10:51.986909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.986928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:36.041 [2024-11-20 09:10:51.997663] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ec408 00:26:36.041 [2024-11-20 09:10:51.998848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:6758 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.041 [2024-11-20 09:10:51.998867] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.007052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f5be8
00:26:36.041 [2024-11-20 09:10:52.007764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:18713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.007783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.015785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e7c50
00:26:36.041 [2024-11-20 09:10:52.016852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:24023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.016871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.025151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f6020
00:26:36.041 [2024-11-20 09:10:52.025742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:19859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.025762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.033798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ebfd0
00:26:36.041 [2024-11-20 09:10:52.034338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.034361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.043119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eaef0
00:26:36.041 [2024-11-20 09:10:52.043965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5816 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.043984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.054730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fc560
00:26:36.041 [2024-11-20 09:10:52.056265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:14898 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.056284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.061180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166feb58
00:26:36.041 [2024-11-20 09:10:52.061867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:12782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.061885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:36.041 [2024-11-20 09:10:52.070874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f35f0
00:26:36.041 [2024-11-20 09:10:52.071870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.041 [2024-11-20 09:10:52.071889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.082468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ed4e8
00:26:36.301 [2024-11-20 09:10:52.083981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:2542 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.084002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.089199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fe2e8
00:26:36.301 [2024-11-20 09:10:52.089909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.089929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.098789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f5378
00:26:36.301 [2024-11-20 09:10:52.099660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:4658 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.099679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.110228] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e8088
00:26:36.301 [2024-11-20 09:10:52.111578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:5930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.111597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.119533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e12d8
00:26:36.301 [2024-11-20 09:10:52.120890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.120908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.126294] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fc128
00:26:36.301 [2024-11-20 09:10:52.127051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:14843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.127070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.135890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e7818
00:26:36.301 [2024-11-20 09:10:52.136781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.136799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.145273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166dfdc0
00:26:36.301 [2024-11-20 09:10:52.145697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:13277 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.145717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.157114] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eaab8
00:26:36.301 [2024-11-20 09:10:52.158590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:21269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.158609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.163566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e0630
00:26:36.301 [2024-11-20 09:10:52.164193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:23160 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.164211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.173395] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eb328
00:26:36.301 [2024-11-20 09:10:52.174299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:23251 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.174318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.184911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fda78
00:26:36.301 [2024-11-20 09:10:52.186361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:10961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.186380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0053 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.194309] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f0bc0
00:26:36.301 [2024-11-20 09:10:52.195686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:12799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.195705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.201073] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166df550
00:26:36.301 [2024-11-20 09:10:52.201865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:11061 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.201883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.210391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fe2e8
00:26:36.301 [2024-11-20 09:10:52.211162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1238 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.211181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.221197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f57b0
00:26:36.301 [2024-11-20 09:10:52.222354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:10287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.222373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:26:36.301 [2024-11-20 09:10:52.228040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fc998
00:26:36.301 [2024-11-20 09:10:52.228722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:22635 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.301 [2024-11-20 09:10:52.228740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.239548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f2948
00:26:36.302 [2024-11-20 09:10:52.240734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:14451 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.240753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.248984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e1f80
00:26:36.302 [2024-11-20 09:10:52.249694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:21009 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.249713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.257623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fd640
00:26:36.302 [2024-11-20 09:10:52.258905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8143 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.258924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.265493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f6020
00:26:36.302 [2024-11-20 09:10:52.266180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:8655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.266199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.275722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ec840
00:26:36.302 [2024-11-20 09:10:52.276539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:12575 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.276565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.284278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166eaab8
00:26:36.302 [2024-11-20 09:10:52.285122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.285142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.296290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ec840
00:26:36.302 [2024-11-20 09:10:52.297799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.297818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.302745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e3498
00:26:36.302 [2024-11-20 09:10:52.303426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:17613 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.303445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.312045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e4578
00:26:36.302 [2024-11-20 09:10:52.312741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23769 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.312760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.321231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f6020
00:26:36.302 [2024-11-20 09:10:52.321926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13705 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.321944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.302 [2024-11-20 09:10:52.330388] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166fbcf0
00:26:36.302 [2024-11-20 09:10:52.331106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:20384 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.302 [2024-11-20 09:10:52.331125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.339760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f3a28
00:26:36.562 [2024-11-20 09:10:52.340480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.340501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.349102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f1868
00:26:36.562 [2024-11-20 09:10:52.349846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:8690 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.349867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.358308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f9f68
00:26:36.562 [2024-11-20 09:10:52.359041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:12014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.359060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.366905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ec408
00:26:36.562 [2024-11-20 09:10:52.367638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:11248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.367658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.377293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f35f0
00:26:36.562 [2024-11-20 09:10:52.378121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:25108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.378142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.386838] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f9b30
00:26:36.562 [2024-11-20 09:10:52.387790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.387810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.395975] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f9f68
00:26:36.562 [2024-11-20 09:10:52.396974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:20951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.396994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.405478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e0a68
00:26:36.562 [2024-11-20 09:10:52.406091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:2752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.406111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.414605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e1f80
00:26:36.562 [2024-11-20 09:10:52.415488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:15254 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.415507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.424166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ea680
00:26:36.562 [2024-11-20 09:10:52.425179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.425198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.433576] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ea248
00:26:36.562 [2024-11-20 09:10:52.434096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6225 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.434115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.442936] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f81e0
00:26:36.562 [2024-11-20 09:10:52.443797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:14423 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.443816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.452368] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166e5ec8
00:26:36.562 [2024-11-20 09:10:52.453030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:287 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.453049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.462990] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f1430
00:26:36.562 [2024-11-20 09:10:52.464446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:19164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.464464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.471427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166ddc00
00:26:36.562 [2024-11-20 09:10:52.472450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:22167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.472470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.480533] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166dfdc0
00:26:36.562 [2024-11-20 09:10:52.481518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:24748 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.481537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.489172] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f96f8
00:26:36.562 [2024-11-20 09:10:52.490117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1860 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.490137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.498793] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f2d80
00:26:36.562 [2024-11-20 09:10:52.499908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:2119 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.499927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.508633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166dece0
00:26:36.562 [2024-11-20 09:10:52.509641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:13589 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.509661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.518218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166f6cc8
00:26:36.562 [2024-11-20 09:10:52.519422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:20838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.519445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.526512] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.562 [2024-11-20 09:10:52.526692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.526711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.536200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.562 [2024-11-20 09:10:52.536362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.536380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.545796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.562 [2024-11-20 09:10:52.545963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.562 [2024-11-20 09:10:52.545981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.562 [2024-11-20 09:10:52.555411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.563 [2024-11-20 09:10:52.555573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.563 [2024-11-20 09:10:52.555591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.563 [2024-11-20 09:10:52.565015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.563 [2024-11-20 09:10:52.565177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14751 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.563 [2024-11-20 09:10:52.565195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.563 [2024-11-20 09:10:52.574607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.563 [2024-11-20 09:10:52.574770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:321 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.563 [2024-11-20 09:10:52.574787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.563 [2024-11-20 09:10:52.584399] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.563 [2024-11-20 09:10:52.584580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17747 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.563 [2024-11-20 09:10:52.584599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.563 [2024-11-20 09:10:52.594072] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.563 [2024-11-20 09:10:52.594235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.563 [2024-11-20 09:10:52.594253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.604019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.604215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13036 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.604235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.613702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.613863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.613882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.623310] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.623471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13019 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.623489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.632907] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.633077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10708 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.633096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.642501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.642665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13058 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.642683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.652097] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.652259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15381 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.652277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.662062] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.662222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.662240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.671668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.671830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20594 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.671863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.681345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.681507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.681525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.691032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.822 [2024-11-20 09:10:52.691199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4838 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.822 [2024-11-20 09:10:52.691217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.822 [2024-11-20 09:10:52.700628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.700787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24728 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.700805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.710227] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.710389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3849 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.710407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.719868] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.720038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.720057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.729498] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.729658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2559 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.729676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.739105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.739267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.739284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.748705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.748866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5988 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.748883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.758317] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.758503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 [2024-11-20 09:10:52.758520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:26:36.823 [2024-11-20 09:10:52.767987] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470
00:26:36.823 [2024-11-20 09:10:52.768170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18754 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:36.823 
[2024-11-20 09:10:52.768189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.777618] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.777779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20278 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.777796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.787312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.787473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8954 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.787492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.796892] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.797065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.797083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.806503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.806663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4924 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.806680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.816102] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.816262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:268 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.816279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.825697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.825856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7409 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.825874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.835296] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.835455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2768 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.835472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.844879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.845053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:9005 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.845071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:36.823 [2024-11-20 09:10:52.854475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:36.823 [2024-11-20 09:10:52.854657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22866 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:36.823 [2024-11-20 09:10:52.854679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.864434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.864617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.864637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.874103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.874264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7674 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.874283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 26946.00 IOPS, 105.26 MiB/s [2024-11-20T08:10:53.123Z] [2024-11-20 09:10:52.883692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 
09:10:52.883855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.883873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.893565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.893726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2130 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.893744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.903150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.903312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.903330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.912958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.913121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10871 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.913140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.922542] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with 
pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.922704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.922722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.932126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.932289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.932307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.941712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.941876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.941893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.951381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.951542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16153 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.951561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.960984] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.961146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3995 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.961163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.970583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.970746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19615 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.970763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.980205] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.082 [2024-11-20 09:10:52.980368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.082 [2024-11-20 09:10:52.980385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.082 [2024-11-20 09:10:52.989883] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:52.990051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15843 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:52.990070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:52.999469] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:52.999628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:52.999646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.009059] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.009221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.009239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.018655] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.018815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2822 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.018832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.028221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.028382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.028400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:26:37.083 [2024-11-20 09:10:53.037896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.038093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12854 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.038113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.047575] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.047736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17424 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.047754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.057150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.057311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.057327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.066745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.066906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20980 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.066923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.076331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.076490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.076507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.085969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.086149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19377 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.086168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.095627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.095789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.095806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.105211] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.105373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.105394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.083 [2024-11-20 09:10:53.114796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.083 [2024-11-20 09:10:53.114960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.083 [2024-11-20 09:10:53.114978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.124766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.124931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.124955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.134413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.134575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.134594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.144023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.144185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7810 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.144203] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.153586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.153746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13872 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.153763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.163424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.163585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6619 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.163602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.173013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.173195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 [2024-11-20 09:10:53.173213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.182723] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.342 [2024-11-20 09:10:53.182887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13792 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.342 
[2024-11-20 09:10:53.182905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.342 [2024-11-20 09:10:53.192365] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.192526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.192547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.201982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.202143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15309 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.202161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.211572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.211736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.211753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.221191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.221351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23148 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.221368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.230753] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.230914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8432 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.230932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.240379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.240539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.240557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.249966] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.250129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.250146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.259561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.259722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 
nsid:1 lba:1145 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.259740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.269150] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.269310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.269327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.278771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.278962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.278980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.288503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.288684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12780 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.288703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.298178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.298359] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5004 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.298377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.307850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.308039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.308057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.317506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.317667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.317684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.327109] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.327270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.327287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.336700] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 
[2024-11-20 09:10:53.336861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22616 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.336879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.346289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.346449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12943 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.346467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.355874] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.356046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20584 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.356064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.365697] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.365860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.365878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.343 [2024-11-20 09:10:53.375586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) 
with pdu=0x2000166de470 00:26:37.343 [2024-11-20 09:10:53.375747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.343 [2024-11-20 09:10:53.375765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.385551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.385717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.385737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.395241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.395401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.395420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.404836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.405006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:5359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.405024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.414668] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.414832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.414849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.424249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.424410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24270 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.424428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.433847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.434017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.434036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.443431] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.443611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6910 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.443632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.453062] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.453243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:21472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.453261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.462732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.462894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12532 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.462911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.472316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.472476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:13055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.472493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.481955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.482115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:319 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.482134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 
00:26:37.603 [2024-11-20 09:10:53.491652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.491814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2129 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.491832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.501245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.501408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.501425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.510844] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.511012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7873 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.511030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.520421] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.520581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.520598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.530025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.530190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.530208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.603 [2024-11-20 09:10:53.539604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.603 [2024-11-20 09:10:53.539765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.603 [2024-11-20 09:10:53.539783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.549199] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.549361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17650 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.549379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.558773] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.558933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17444 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.558956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.568374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.568555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3621 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.568573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.577991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.578152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.578170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.587796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.587984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:4924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.588003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.597458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.597617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23782 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.597634] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.607022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.607184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.607202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.616627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.616788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:20720 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.616805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.626198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.626359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:22906 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 [2024-11-20 09:10:53.626375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.604 [2024-11-20 09:10:53.635798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.604 [2024-11-20 09:10:53.635961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.604 
[2024-11-20 09:10:53.635979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.863 [2024-11-20 09:10:53.645733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.863 [2024-11-20 09:10:53.645903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:24750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.863 [2024-11-20 09:10:53.645923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.863 [2024-11-20 09:10:53.655374] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.655534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.655552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.665191] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.665353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18362 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.665371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.674776] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.674937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:12799 len:1 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.674960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.684371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.684531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8821 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.684549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.694054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.694236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:19078 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.694259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.703703] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.703864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:18485 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.703882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.713289] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.713451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:9 nsid:1 lba:23190 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.713468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.722877] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.723044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.723062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.732460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.732621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:1387 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.732639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.742217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.742400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:6488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.742418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.751862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.752028] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10114 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.752046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.761460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.761622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.761639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.771053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.771216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14972 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.771233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.780638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.780799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.780820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.790271] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 
[2024-11-20 09:10:53.790432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:7195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.790450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.799846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.800019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11342 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.800037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.809460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.809621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:2088 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.809638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.819058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.819223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.819241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.828657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) 
with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.828819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:11625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.828837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.838260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.838420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:3992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.838438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.847848] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.848018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9614 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.848037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.857716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.857881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.857899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.864 [2024-11-20 09:10:53.867530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.864 [2024-11-20 09:10:53.867694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:10276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.864 [2024-11-20 09:10:53.867710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.865 [2024-11-20 09:10:53.877132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.865 [2024-11-20 09:10:53.877294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:16156 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.865 [2024-11-20 09:10:53.877312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.865 26725.00 IOPS, 104.39 MiB/s [2024-11-20T08:10:53.906Z] [2024-11-20 09:10:53.886741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40640) with pdu=0x2000166de470 00:26:37.865 [2024-11-20 09:10:53.886922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9343 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:37.865 [2024-11-20 09:10:53.886940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:26:37.865 00:26:37.865 Latency(us) 00:26:37.865 [2024-11-20T08:10:53.906Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:37.865 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:37.865 nvme0n1 : 2.00 26723.07 104.39 0.00 0.00 4781.71 2265.27 12936.24 00:26:37.865 [2024-11-20T08:10:53.906Z] =================================================================================================================== 00:26:37.865 
[2024-11-20T08:10:53.906Z] Total : 26723.07 104.39 0.00 0.00 4781.71 2265.27 12936.24 00:26:37.865 { 00:26:37.865 "results": [ 00:26:37.865 { 00:26:37.865 "job": "nvme0n1", 00:26:37.865 "core_mask": "0x2", 00:26:37.865 "workload": "randwrite", 00:26:37.865 "status": "finished", 00:26:37.865 "queue_depth": 128, 00:26:37.865 "io_size": 4096, 00:26:37.865 "runtime": 2.004934, 00:26:37.865 "iops": 26723.074176007787, 00:26:37.865 "mibps": 104.38700850003042, 00:26:37.865 "io_failed": 0, 00:26:37.865 "io_timeout": 0, 00:26:37.865 "avg_latency_us": 4781.709171090666, 00:26:37.865 "min_latency_us": 2265.2660869565216, 00:26:37.865 "max_latency_us": 12936.23652173913 00:26:37.865 } 00:26:37.865 ], 00:26:37.865 "core_count": 1 00:26:37.865 } 00:26:38.124 09:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:38.124 09:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:38.124 09:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:38.125 09:10:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:38.125 | .driver_specific 00:26:38.125 | .nvme_error 00:26:38.125 | .status_code 00:26:38.125 | .command_transient_transport_error' 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 210 > 0 )) 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2488134 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2488134 ']' 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2488134 00:26:38.125 09:10:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2488134 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2488134' 00:26:38.125 killing process with pid 2488134 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2488134 00:26:38.125 Received shutdown signal, test time was about 2.000000 seconds 00:26:38.125 00:26:38.125 Latency(us) 00:26:38.125 [2024-11-20T08:10:54.166Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:38.125 [2024-11-20T08:10:54.166Z] =================================================================================================================== 00:26:38.125 [2024-11-20T08:10:54.166Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:38.125 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2488134 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 
00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2488827 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2488827 /var/tmp/bperf.sock 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 2488827 ']' 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:38.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:38.384 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.384 [2024-11-20 09:10:54.362260] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:26:38.384 [2024-11-20 09:10:54.362310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2488827 ] 00:26:38.384 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:38.384 Zero copy mechanism will not be used. 00:26:38.643 [2024-11-20 09:10:54.437415] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:38.643 [2024-11-20 09:10:54.479875] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:38.643 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:38.643 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:26:38.643 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.643 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:38.901 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:38.901 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:38.901 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:38.901 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:38.902 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:38.902 09:10:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:39.160 nvme0n1 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:39.160 09:10:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:39.419 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:39.419 Zero copy mechanism will not be used. 00:26:39.419 Running I/O for 2 seconds... 
00:26:39.419 [2024-11-20 09:10:55.259827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.419 [2024-11-20 09:10:55.259907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.419 [2024-11-20 09:10:55.259934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.419 [2024-11-20 09:10:55.264312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.419 [2024-11-20 09:10:55.264383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.419 [2024-11-20 09:10:55.264406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.419 [2024-11-20 09:10:55.268643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.419 [2024-11-20 09:10:55.268712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.419 [2024-11-20 09:10:55.268732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.419 [2024-11-20 09:10:55.272932] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.419 [2024-11-20 09:10:55.273003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.419 [2024-11-20 09:10:55.273023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.419 [2024-11-20 09:10:55.277219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.419 [2024-11-20 09:10:55.277281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.419 [2024-11-20 09:10:55.277303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.281609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.281677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.281697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.285922] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.286004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.286023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.290342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.290423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.290443] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.294696] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.294751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.294770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.298882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.298955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.298973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.303027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.303090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.303109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.307178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.307248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 
09:10:55.307267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.311283] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.311343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.311362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.315405] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.315464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.315482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.319547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.319604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.319622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.323640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.323708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT 
DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.323727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.327774] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.327846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.327864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.332254] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.332350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.332368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.336787] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.336848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.336867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.341009] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.341080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.341099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.345190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.345258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.345276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.349506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.349565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.349583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.353690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.353754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.353772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.357811] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.357864] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.357883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.362034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.362095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.362113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.366182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.366247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.366265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.370601] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.370665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.370683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.374750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:39.420 [2024-11-20 09:10:55.374817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.374835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.378911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.378981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.379000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.383036] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.383100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.383118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.387176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.387247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.387272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.420 [2024-11-20 09:10:55.391381] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.420 [2024-11-20 09:10:55.391444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.420 [2024-11-20 09:10:55.391463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.395493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.395546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.395565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.399627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.399682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.399700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.403781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.403837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.403856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.407892] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.407957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.407976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.411993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.412052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.412070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.416118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.416188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.416206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.420449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.420503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.420521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:39.421 [2024-11-20 09:10:55.424545] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.424613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.424632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.428676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.428729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.428747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.432965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.433053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.433071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.437277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.437332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.437350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.441476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.441540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.441558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.445643] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.445698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.445716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.449800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.449867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.449885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.421 [2024-11-20 09:10:55.454157] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.421 [2024-11-20 09:10:55.454212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.421 [2024-11-20 09:10:55.454233] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.458581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.458637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.458658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.462846] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.462905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.462926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.467008] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.467067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.467087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.471203] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.471257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.681 [2024-11-20 09:10:55.471277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.475332] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.475403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.475422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.479600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.479656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.479675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.483847] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.483944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.483969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.488219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.488276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.488295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.493322] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.493372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.681 [2024-11-20 09:10:55.493391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.681 [2024-11-20 09:10:55.498565] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.681 [2024-11-20 09:10:55.498638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.498660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.503740] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.503806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.503824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.508651] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.508719] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.508737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.513687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.513754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.513772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.518765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.518821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.518840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.523744] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.523799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.523818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.528250] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.528324] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.528343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.532489] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.532549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.532568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.536683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.536742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.536761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.540815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.540875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.540893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.545007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with 
pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.545085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.545103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.549217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.549296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.549314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.553341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.553400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.553418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.557479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.557591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.557609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.561641] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.561699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.561717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.565799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.565854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.565872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.570007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.570077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.570096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.574182] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.574247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.574266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 
09:10:55.578383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.578447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.578466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.583049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.583118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.583137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.587485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.587557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.587576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.591899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.591964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.591982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.596483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.596542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.596560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.601280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.601337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.601355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.605837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.605920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.605938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.610526] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.610591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.610609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.615231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.615329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.682 [2024-11-20 09:10:55.615350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.682 [2024-11-20 09:10:55.620088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.682 [2024-11-20 09:10:55.620158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.620177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.624810] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.624872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.624891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.630116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.630171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.630189] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.635441] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.635573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.635591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.640495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.640553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.640572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.645426] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.645485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.645504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.650108] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.650244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.683 [2024-11-20 09:10:55.650262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.655537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.655604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.655623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.660701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.660759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.660781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.665404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.665477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.665495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.670183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.670257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7584 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.670276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.675049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.675109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.675127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.679709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.679766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.679785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.684398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.684450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.684468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.689720] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.689823] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.689842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.694913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.695021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.695039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.701050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.701122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.701141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.706813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.706916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.706935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.711970] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.712036] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.712054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.683 [2024-11-20 09:10:55.717112] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.683 [2024-11-20 09:10:55.717184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.683 [2024-11-20 09:10:55.717205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.943 [2024-11-20 09:10:55.722301] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.943 [2024-11-20 09:10:55.722386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.943 [2024-11-20 09:10:55.722406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.943 [2024-11-20 09:10:55.727234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.943 [2024-11-20 09:10:55.727334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.727354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.731780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with 
pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.731848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.731867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.736096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.736167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.736186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.740554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.740650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.740668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.745184] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.745266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.745288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.749937] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.750022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.750040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.754543] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.754597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.754615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.759031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.759133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.759151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.763483] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.763592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.763610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 
09:10:55.767791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.767857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.767876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.772177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.772235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.772253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.776554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.776617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.776637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.780930] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.780992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.781010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.785231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.785293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.785315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.789667] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.789758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.789778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.794198] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.794261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.794280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.799056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.799150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.799168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.803827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.803881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.803899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.808334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.808440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.808458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.812887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.813001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.813018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.817275] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.817351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.817369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.821852] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.821916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.944 [2024-11-20 09:10:55.821934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.944 [2024-11-20 09:10:55.826323] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.944 [2024-11-20 09:10:55.826409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.826428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.830683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.830739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.830757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.834902] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.834968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:39.945 [2024-11-20 09:10:55.834987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.839508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.839566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.839584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.843865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.843919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.843938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.848116] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.848183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.848200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.852331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.852395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.852413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.856551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.856605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.856623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.860770] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.860826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.860847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.864981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.865039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.865057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.869378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.869448] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.869465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.874016] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.874073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.874091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.879032] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.879108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.879126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.884488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.884550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.884568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.889561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:39.945 [2024-11-20 09:10:55.889645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.889681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.894342] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.894413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.894431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.899084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.899185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.899203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.903728] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.903786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.903808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.908327] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.908382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.908401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.912965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.913036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.913054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.917472] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.917531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.917549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.922177] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.922238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.922256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.926774] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.945 [2024-11-20 09:10:55.926873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.945 [2024-11-20 09:10:55.926891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.945 [2024-11-20 09:10:55.930809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.931052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.931071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.935054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.935313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.935332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.939245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.939496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.939514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:39.946 [2024-11-20 09:10:55.943567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.943817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.943836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.948152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.948414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.948433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.953153] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.953415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.953434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.957959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.958211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.958230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.962430] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.962686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.962705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.966825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.967130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.967149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.971341] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.971592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.971611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.975731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.976000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.976019] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:39.946 [2024-11-20 09:10:55.980262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:39.946 [2024-11-20 09:10:55.980519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:39.946 [2024-11-20 09:10:55.980544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:55.984727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:55.984994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:55.985016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:55.989255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:55.989510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:55.989531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:55.993761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:55.994046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.207 [2024-11-20 09:10:55.994065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:55.998058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:55.998329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:55.998348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.002422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.002693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.002712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.006733] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.006962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.006981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.011053] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.011288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:12288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.011308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.015233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.015486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.015506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.019253] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.019498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.019521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.023387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.023646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.023666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.027581] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.027843] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.027863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.031809] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.032072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.032093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.035889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.036145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.036165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.039995] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.040244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.040263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.044006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.207 [2024-11-20 09:10:56.044259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.044277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.048012] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.048261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.048280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.052025] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.052278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.052298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.056287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.056537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.056556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.060453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.060710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.060729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.064889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.065151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.065170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.069617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.069864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.069882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.073654] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.207 [2024-11-20 09:10:56.073904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.207 [2024-11-20 09:10:56.073924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.207 [2024-11-20 09:10:56.077638] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.077907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.077926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.081662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.081915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.081934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.085638] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.085890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.085909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.089946] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.090224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.090251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:40.208 [2024-11-20 09:10:56.095484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.095850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.095869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.101134] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.101383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.101403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.105561] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.105801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.105820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.110067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.110336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.110355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.114590] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.114860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.114880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.119158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.119420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.119439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.123743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.123996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.124015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.128282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.128531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.128551] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.132845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.133125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.133149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.137660] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.137913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.137932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.142652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.142917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.142936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.147262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.147529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.208 [2024-11-20 09:10:56.147547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.152501] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.152748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.152767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.157416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.157668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.157688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.162670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.162926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.162945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.167560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.167813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.167832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.172771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.173071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.173090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.178473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.178705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.178724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.183325] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.183577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.183596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.188642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.188881] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.188901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.193577] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.193816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.193834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.198619] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.198868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.198888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.203633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.208 [2024-11-20 09:10:56.203880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.208 [2024-11-20 09:10:56.203899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.208 [2024-11-20 09:10:56.208870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.209 [2024-11-20 09:10:56.209127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.209147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.213702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.213960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.213980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.218166] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.218418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.218442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.222579] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.222825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.222844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.226712] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.226964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.226983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.230727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.230974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.230994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.234741] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.234996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.235016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.238746] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.239004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.239023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.209 [2024-11-20 09:10:56.242913] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.209 [2024-11-20 09:10:56.243180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.209 [2024-11-20 09:10:56.243202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.469 [2024-11-20 09:10:56.247023] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.247275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.469 [2024-11-20 09:10:56.247297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.469 [2024-11-20 09:10:56.251066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.251325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.469 [2024-11-20 09:10:56.251346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.469 [2024-11-20 09:10:56.255079] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.255341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.469 [2024-11-20 09:10:56.255365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:40.469 [2024-11-20 09:10:56.259046] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.259284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.469 [2024-11-20 09:10:56.259304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.469 6885.00 IOPS, 860.62 MiB/s [2024-11-20T08:10:56.510Z] [2024-11-20 09:10:56.264300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.264569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.469 [2024-11-20 09:10:56.264587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.469 [2024-11-20 09:10:56.268290] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.469 [2024-11-20 09:10:56.268545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.268564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.272280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.272538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.272557] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.276307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.276562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.276582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.280386] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.280639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.280659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.284449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.284713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.284733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.288492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.288755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.470 [2024-11-20 09:10:56.288775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.292820] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.293078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.293098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.296900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.297158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.297177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.300863] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.301108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.301128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.304785] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.305046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.305065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.308722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.308983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.309003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.312648] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.312899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.312919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.316593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.316833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.316852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.320527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.320779] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.320799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.324473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.324718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.324741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.328552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.328805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.328825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.333018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.333267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.333286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.338054] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.470 [2024-11-20 09:10:56.338299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.338318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.342627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.342879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.342898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.346904] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.347181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.347201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.351118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.351364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.351384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.355451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.355988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.356007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.359915] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.360169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.360188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.363980] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.364240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.364260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.368113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.368369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.368388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.372222] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.372469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.372489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.376298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.376552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.470 [2024-11-20 09:10:56.376572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.470 [2024-11-20 09:10:56.380383] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.470 [2024-11-20 09:10:56.380645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.380665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.384321] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.384544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.384563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:40.471 [2024-11-20 09:10:56.388213] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.388442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.388462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.391992] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.392199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.392219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.395719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.395923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.395942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.399479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.399707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.399728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.403178] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.403403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.403422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.406845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.407062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.407082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.410530] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.410729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.410748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.414155] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.414350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.414369] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.417782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.417987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.418004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.421394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.421587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.421605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.425033] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.425243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.425262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.428670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.428866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.471 [2024-11-20 09:10:56.428888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.432293] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.432481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.432498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.435899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.436096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.436114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.439481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.439680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.443141] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.443321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.443339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.446917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.447120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.447140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.450690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.450895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.450914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.454481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.454698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.454717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.458343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.458541] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.458561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.462416] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.462649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.462668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.467475] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.467719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.467739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.473066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.473314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.473333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.478881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.471 [2024-11-20 09:10:56.479197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.479217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.484960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.471 [2024-11-20 09:10:56.485208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.471 [2024-11-20 09:10:56.485227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.471 [2024-11-20 09:10:56.491118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.472 [2024-11-20 09:10:56.491301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.472 [2024-11-20 09:10:56.491320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.472 [2024-11-20 09:10:56.497128] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.472 [2024-11-20 09:10:56.497362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.472 [2024-11-20 09:10:56.497381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.472 [2024-11-20 09:10:56.503297] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.472 [2024-11-20 09:10:56.503577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.472 [2024-11-20 09:10:56.503599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.509812] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.510093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.510114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.516457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.516600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.516619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.523379] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.523581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.523600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.530230] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.530481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.530502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.536536] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.536717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.536735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.543285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.543444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.543463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.549743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.549938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.549963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:40.732 [2024-11-20 09:10:56.556605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.556752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.556769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.561912] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.562072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.562090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.566647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.566807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.566828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.570962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.571153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.571171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.732 [2024-11-20 09:10:56.574807] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.732 [2024-11-20 09:10:56.574985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.732 [2024-11-20 09:10:56.575002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.578551] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.578721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.578740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.582255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.582435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.582453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.586170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.586345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.586363] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.589910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.590083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.590102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.593671] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.593851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.593869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.597408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.597601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.597625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.601096] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.601277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.733 [2024-11-20 09:10:56.601295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.604804] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.604985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.605002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.608563] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.608730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.608748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.612236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.612411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.612429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.615891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.616072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10944 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.616090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.619558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.619731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.619749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.623248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.623430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.623447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.626875] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.627062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.627080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.630552] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.630714] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.630733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.634245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.634426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.634444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.637856] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.638038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.638056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.641481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.641651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.641669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.645076] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.733 [2024-11-20 09:10:56.645256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.645273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.648682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.648860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.648878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.652287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.652455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.652472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.655965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.656140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.656158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.659608] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.659797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.659814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.663407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.663581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.663602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.667850] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.668001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.668018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.672564] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.672737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.672755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.733 [2024-11-20 09:10:56.677099] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.733 [2024-11-20 09:10:56.677275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.733 [2024-11-20 09:10:56.677293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.682125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.682288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.682306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.686702] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.686841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.686859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.691057] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.691247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.691265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:26:40.734 [2024-11-20 09:10:56.695605] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.695742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.695761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.700142] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.700309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.700328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.704879] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.705040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.705059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.709117] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.709267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.709286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.713657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.713803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.713822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.718372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.718486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.718504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.723021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.723148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.723167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.727628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.727763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.727781] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.732429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.732589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.732608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.736964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.737130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.737148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.741513] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.741689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.741707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.745910] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.746313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.734 [2024-11-20 09:10:56.746333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.750678] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.750877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.750902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.755149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.755299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.755318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.759231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.759376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.759393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.763145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.763311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2144 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.763328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.734 [2024-11-20 09:10:56.767126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.734 [2024-11-20 09:10:56.767314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.734 [2024-11-20 09:10:56.767333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.771221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.771394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.771414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.776126] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.776300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.776319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.780158] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.780332] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.780355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.784067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.784238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.784258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.787962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.788142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.788160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.791859] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.792020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.792039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.795687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.995 [2024-11-20 09:10:56.795868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.795886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.799499] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.799657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.799675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.803326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.803492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.803509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.807099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.807267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.807285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.810911] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.811101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.811119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.814709] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.814904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.814922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.818529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.818700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.818718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.822331] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.822500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.822518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.826141] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.826321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.826339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.829962] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.830135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.830153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.833732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.833898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.833916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.837527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.837695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.837713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:26:40.995 [2024-11-20 09:10:56.841300] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.841473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.841491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.845125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.845287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.845306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.848887] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.849077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.849095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.852680] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.852867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.852885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.856503] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.856676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.856694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.860292] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.860465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.860483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.864091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.864250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.864268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.995 [2024-11-20 09:10:56.867960] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.995 [2024-11-20 09:10:56.868114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.995 [2024-11-20 09:10:56.868133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.872279] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.872432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.872449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.876780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.876925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.876943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.881433] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.881587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.881608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.886295] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.886453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.996 [2024-11-20 09:10:56.886471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.890701] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.890856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.890874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.895238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.895385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.895403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.899711] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.899863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.899880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.904324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.904477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.904495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.908496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.908681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.908698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.912429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.912586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.912604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.916369] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.916544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.916562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.920349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.920500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.920518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.924424] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.924587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.924605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.928287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.928469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.928487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.932270] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.932431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.932449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.936162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:40.996 [2024-11-20 09:10:56.936331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.936349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.940104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.940259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.940277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.944135] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.944295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.944313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.948259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.948419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.948437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.952382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.952552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.952570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.956276] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.956439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.956457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.960206] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.960360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.960377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.964082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.964245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.964263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.968054] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.968218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.968237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.971959] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.972118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.996 [2024-11-20 09:10:56.972135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.996 [2024-11-20 09:10:56.975813] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.996 [2024-11-20 09:10:56.975971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.975989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:56.979755] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:56.979921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.979939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:26:40.997 [2024-11-20 09:10:56.984069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:56.984233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.984251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:56.988474] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:56.988641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.988664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:56.993411] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:56.993560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.993578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:56.998018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:56.998138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:56.998156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.002445] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.002611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.002629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.007140] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.007303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.007321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.011931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.012082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.012100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.016118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.016278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.016297] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.020071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.020244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.020261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.024064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.024218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.024236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.028082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.028269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:40.997 [2024-11-20 09:10:57.028288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:40.997 [2024-11-20 09:10:57.032189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:40.997 [2024-11-20 09:10:57.032387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:40.997 [2024-11-20 09:10:57.032408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.256 [2024-11-20 09:10:57.036193] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.256 [2024-11-20 09:10:57.036370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.256 [2024-11-20 09:10:57.036392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.040378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.040558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.040579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.045223] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.045380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.045398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.050598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.050800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.050820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.057130] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.057308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.057327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.063118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.063257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.063276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.068706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.068846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.068864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.073981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.074196] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.074215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.079714] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.079970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.079989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.086314] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.086507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.086524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.091320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.091521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.091548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.095737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 
00:26:41.257 [2024-11-20 09:10:57.095880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.095898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.099978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.100145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.100164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.104245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.104410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.104428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.109442] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.109642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.109660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.113823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.114016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.114039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.117977] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.118150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.118167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.122021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.122216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.122234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.127710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.128210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.128230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.133467] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.133639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.133658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.138978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.139148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.139166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.145230] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.145455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.145475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.150470] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.150620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.150638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:26:41.257 [2024-11-20 09:10:57.154679] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.154829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.154847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.159028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.159182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.159201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.163802] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.163958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.163976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.168075] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.168228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.168246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.257 [2024-11-20 09:10:57.173473] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.257 [2024-11-20 09:10:57.173721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.257 [2024-11-20 09:10:57.173740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.179495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.179686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.179705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.185069] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.185283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:64 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.185302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.190603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.190728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.190747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.196145] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.196339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.196365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.202652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.202913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.202933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.208754] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.208889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.208907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.214264] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.214471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:41.258 [2024-11-20 09:10:57.214495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.220316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.220442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.220460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.225537] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.225680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.225698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.230706] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.230899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.230917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.235941] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.236132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22272 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.236151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.241027] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.241204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.241222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.245994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.246123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.246141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.251188] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.251363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.251385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.256265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.256450] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.256468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:26:41.258 [2024-11-20 09:10:57.260514] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.260694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.260712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:26:41.258 6951.00 IOPS, 868.88 MiB/s [2024-11-20T08:10:57.299Z] [2024-11-20 09:10:57.265725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x1d40980) with pdu=0x2000166ff3c8 00:26:41.258 [2024-11-20 09:10:57.265828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:41.258 [2024-11-20 09:10:57.265846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:26:41.258 00:26:41.258 Latency(us) 00:26:41.258 [2024-11-20T08:10:57.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.258 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:41.258 nvme0n1 : 2.00 6945.99 868.25 0.00 0.00 2298.83 1702.51 6838.54 00:26:41.258 [2024-11-20T08:10:57.299Z] =================================================================================================================== 00:26:41.258 [2024-11-20T08:10:57.299Z] Total : 6945.99 868.25 0.00 0.00 2298.83 1702.51 6838.54 00:26:41.258 { 00:26:41.258 "results": [ 
00:26:41.258 { 00:26:41.258 "job": "nvme0n1", 00:26:41.258 "core_mask": "0x2", 00:26:41.258 "workload": "randwrite", 00:26:41.258 "status": "finished", 00:26:41.258 "queue_depth": 16, 00:26:41.258 "io_size": 131072, 00:26:41.258 "runtime": 2.004178, 00:26:41.258 "iops": 6945.989827250873, 00:26:41.258 "mibps": 868.2487284063591, 00:26:41.258 "io_failed": 0, 00:26:41.258 "io_timeout": 0, 00:26:41.258 "avg_latency_us": 2298.8325551325333, 00:26:41.258 "min_latency_us": 1702.5113043478261, 00:26:41.258 "max_latency_us": 6838.539130434782 00:26:41.258 } 00:26:41.258 ], 00:26:41.258 "core_count": 1 00:26:41.258 } 00:26:41.258 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:41.258 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:41.258 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:41.258 | .driver_specific 00:26:41.258 | .nvme_error 00:26:41.258 | .status_code 00:26:41.258 | .command_transient_transport_error' 00:26:41.258 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 450 > 0 )) 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2488827 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2488827 ']' 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2488827 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:41.516 09:10:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2488827 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2488827' 00:26:41.516 killing process with pid 2488827 00:26:41.516 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2488827 00:26:41.517 Received shutdown signal, test time was about 2.000000 seconds 00:26:41.517 00:26:41.517 Latency(us) 00:26:41.517 [2024-11-20T08:10:57.558Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:41.517 [2024-11-20T08:10:57.558Z] =================================================================================================================== 00:26:41.517 [2024-11-20T08:10:57.558Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:41.517 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2488827 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2486964 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 2486964 ']' 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 2486964 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2486964 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2486964' 00:26:41.775 killing process with pid 2486964 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 2486964 00:26:41.775 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 2486964 00:26:42.034 00:26:42.034 real 0m14.097s 00:26:42.034 user 0m27.089s 00:26:42.034 sys 0m4.565s 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:42.034 ************************************ 00:26:42.034 END TEST nvmf_digest_error 00:26:42.034 ************************************ 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:26:42.034 09:10:57 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:42.034 rmmod nvme_tcp 00:26:42.034 rmmod nvme_fabrics 00:26:42.034 rmmod nvme_keyring 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 2486964 ']' 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 2486964 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 2486964 ']' 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 2486964 00:26:42.034 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2486964) - No such process 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 2486964 is not found' 00:26:42.034 Process with pid 2486964 is not found 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@264 -- # local dev 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:42.034 09:10:58 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 
00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # return 0 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:44.639 09:11:00 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@284 -- # iptr 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-save 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-restore 00:26:44.639 00:26:44.639 real 0m36.846s 00:26:44.639 user 0m56.140s 00:26:44.639 sys 0m13.888s 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:44.639 ************************************ 00:26:44.639 END TEST nvmf_digest 00:26:44.639 ************************************ 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:44.639 ************************************ 00:26:44.639 START TEST nvmf_host_discovery 00:26:44.639 ************************************ 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:26:44.639 * Looking for test storage... 
00:26:44.639 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:26:44.639 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:44.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.640 --rc genhtml_branch_coverage=1 00:26:44.640 --rc genhtml_function_coverage=1 00:26:44.640 --rc 
genhtml_legend=1 00:26:44.640 --rc geninfo_all_blocks=1 00:26:44.640 --rc geninfo_unexecuted_blocks=1 00:26:44.640 00:26:44.640 ' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:44.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.640 --rc genhtml_branch_coverage=1 00:26:44.640 --rc genhtml_function_coverage=1 00:26:44.640 --rc genhtml_legend=1 00:26:44.640 --rc geninfo_all_blocks=1 00:26:44.640 --rc geninfo_unexecuted_blocks=1 00:26:44.640 00:26:44.640 ' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:44.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.640 --rc genhtml_branch_coverage=1 00:26:44.640 --rc genhtml_function_coverage=1 00:26:44.640 --rc genhtml_legend=1 00:26:44.640 --rc geninfo_all_blocks=1 00:26:44.640 --rc geninfo_unexecuted_blocks=1 00:26:44.640 00:26:44.640 ' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:44.640 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:44.640 --rc genhtml_branch_coverage=1 00:26:44.640 --rc genhtml_function_coverage=1 00:26:44.640 --rc genhtml_legend=1 00:26:44.640 --rc geninfo_all_blocks=1 00:26:44.640 --rc geninfo_unexecuted_blocks=1 00:26:44.640 00:26:44.640 ' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:44.640 09:11:00 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:26:44.640 
09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:44.640 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:44.640 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:26:44.641 09:11:00 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@135 -- # net_devs=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:51.211 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:51.211 09:11:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:51.211 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:51.211 
Found net devices under 0000:86:00.0: cvl_0_0 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:51.211 Found net devices under 0000:86:00.1: cvl_0_1 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
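The `setup_interfaces` arithmetic above carves per-pair addresses out of `ip_pool=0x0a000001` (decimal 167772161). A minimal standalone sketch of the integer-to-dotted-quad step, assuming only plain bash arithmetic (the real helper is `val_to_ip` in `nvmf/setup.sh`, whose invocations show up further down in this trace):

```shell
# Sketch of val_to_ip from nvmf/setup.sh: render a 32-bit integer as an
# IPv4 dotted quad. Pure bash arithmetic, no external commands.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side of pair 0)
val_to_ip 167772162   # 10.0.0.2 (target side of pair 0)
```

Each interface pair consumes two consecutive integers from the pool, which is why the loop bound checks `(_dev + no) * 2 <= 255`.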
00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:51.211 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:51.212 10.0.0.1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:51.212 10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@96 -- # local 
pairs=1 pair 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:51.212 09:11:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:51.212 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:51.212 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.397 ms 00:26:51.212 00:26:51.212 --- 10.0.0.1 ping statistics --- 00:26:51.212 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.212 rtt min/avg/max/mdev = 0.397/0.397/0.397/0.000 ms 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@107 -- # local dev=target0 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:51.212 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:51.212 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:51.212 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.107 ms 00:26:51.212 00:26:51.212 --- 10.0.0.2 ping statistics --- 00:26:51.213 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:51.213 rtt min/avg/max/mdev = 0.107/0.107/0.107/0.000 ms 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ 
-n '' ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:26:51.213 09:11:06 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=2492856 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 2492856 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2492856 ']' 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 
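After launching `nvmf_tgt` inside the namespace, the harness blocks in `waitforlisten` until `/var/tmp/spdk.sock` accepts RPCs. A simplified, hypothetical polling loop in the same spirit (the real helper in `autotest_common.sh` also re-checks that the pid is still alive and probes the socket with `rpc.py` rather than just testing for its existence):

```shell
# Hypothetical stand-in for waitforlisten: poll until a UNIX-domain socket
# appears, or give up after max_retries attempts. The real SPDK helper also
# verifies the target pid and retries an actual rpc.py call.
waitfor_sock() {
  local sock=$1 max_retries=${2:-100} i
  for (( i = 0; i < max_retries; i++ )); do
    [ -S "$sock" ] && return 0   # -S: path exists and is a socket
    sleep 0.1
  done
  return 1                        # timed out waiting for the listener
}
```

The `max_retries=100` local seen in the trace corresponds to the retry budget of this kind of loop.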
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:51.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:51.213 [2024-11-20 09:11:06.470567] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:26:51.213 [2024-11-20 09:11:06.470612] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:51.213 [2024-11-20 09:11:06.549410] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:51.213 [2024-11-20 09:11:06.590154] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:51.213 [2024-11-20 09:11:06.590200] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:51.213 [2024-11-20 09:11:06.590207] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:51.213 [2024-11-20 09:11:06.590213] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:51.213 [2024-11-20 09:11:06.590219] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:51.213 [2024-11-20 09:11:06.590773] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.213 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.213 [2024-11-20 09:11:06.724747] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 [2024-11-20 09:11:06.736934] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 ***
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 null0
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 null1
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=2492881
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 2492881 /tmp/host.sock
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 2492881 ']'
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...'
00:26:51.214 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:51.214 09:11:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 [2024-11-20 09:11:06.819019] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:26:51.214 [2024-11-20 09:11:06.819062] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2492881 ]
00:26:51.214 [2024-11-20 09:11:06.895193] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:51.214 [2024-11-20 09:11:06.938174] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # notify_id=0
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]]
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.214 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_bdev_list
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 [2024-11-20 09:11:07.350493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:51.474 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:51.732 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]]
00:26:51.732 09:11:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1
00:26:52.308 [2024-11-20 09:11:08.050860] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached
00:26:52.308 [2024-11-20 09:11:08.050879] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected
00:26:52.308 [2024-11-20 09:11:08.050890] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:52.308 [2024-11-20 09:11:08.179289] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0
00:26:52.308 [2024-11-20 09:11:08.281930] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420
00:26:52.308 [2024-11-20 09:11:08.282710] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a59dd0:1 started.
00:26:52.308 [2024-11-20 09:11:08.284131] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done
00:26:52.308 [2024-11-20 09:11:08.284146] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:26:52.308 [2024-11-20 09:11:08.289984] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a59dd0 was disconnected and freed. delete nvme_qpair.
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.566 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
[2024-11-20 09:11:08.744843] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x1a66f90:1 started.
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
[2024-11-20 09:11:08.751160] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1a66f90 was disconnected and freed. delete nvme_qpair.
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count ))
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.827 [2024-11-20 09:11:08.846839] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
[2024-11-20 09:11:08.847698] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer
[2024-11-20 09:11:08.847718] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]'
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:52.827 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]'
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name'
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:52.828 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]]
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- ))
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid'
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x
00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:53.087 [2024-11-20 09:11:08.975128] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0
00:26:53.087 09:11:08
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:26:53.087 09:11:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:26:53.345 [2024-11-20 09:11:09.278616] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:26:53.345 [2024-11-20 09:11:09.278651] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:53.345 [2024-11-20 09:11:09.278659] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:26:53.345 [2024-11-20 09:11:09.278664] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.284 09:11:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 
00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.284 [2024-11-20 09:11:10.094802] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:26:54.284 [2024-11-20 09:11:10.094828] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:54.284 [2024-11-20 09:11:10.096550] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.284 [2024-11-20 09:11:10.096570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.284 [2024-11-20 09:11:10.096579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 
00:26:54.284 [2024-11-20 09:11:10.096587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.284 [2024-11-20 09:11:10.096595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.284 [2024-11-20 09:11:10.096602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.284 [2024-11-20 09:11:10.096609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:26:54.284 [2024-11-20 09:11:10.096616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:54.284 [2024-11-20 09:11:10.096624] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:54.284 09:11:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:26:54.284 [2024-11-20 09:11:10.106560] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.284 [2024-11-20 09:11:10.116596] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.284 [2024-11-20 09:11:10.116609] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.284 [2024-11-20 09:11:10.116614] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.284 [2024-11-20 09:11:10.116619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.284 [2024-11-20 09:11:10.116643] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:54.284 [2024-11-20 09:11:10.116902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.284 [2024-11-20 09:11:10.116917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.284 [2024-11-20 09:11:10.116925] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.284 [2024-11-20 09:11:10.116938] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.284 [2024-11-20 09:11:10.116961] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.284 [2024-11-20 09:11:10.116969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.284 [2024-11-20 09:11:10.116977] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.284 [2024-11-20 09:11:10.116984] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.284 [2024-11-20 09:11:10.116989] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.284 [2024-11-20 09:11:10.116993] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:54.284 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.284 [2024-11-20 09:11:10.126673] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.284 [2024-11-20 09:11:10.126685] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:26:54.285 [2024-11-20 09:11:10.126689] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.126694] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.285 [2024-11-20 09:11:10.126707] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.126876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.285 [2024-11-20 09:11:10.126889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.285 [2024-11-20 09:11:10.126896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.285 [2024-11-20 09:11:10.126907] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.285 [2024-11-20 09:11:10.126917] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.285 [2024-11-20 09:11:10.126923] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.285 [2024-11-20 09:11:10.126930] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.285 [2024-11-20 09:11:10.126939] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.285 [2024-11-20 09:11:10.126944] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.285 [2024-11-20 09:11:10.126954] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:54.285 [2024-11-20 09:11:10.136737] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.285 [2024-11-20 09:11:10.136748] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.285 [2024-11-20 09:11:10.136752] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.136756] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.285 [2024-11-20 09:11:10.136768] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.137017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.285 [2024-11-20 09:11:10.137030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.285 [2024-11-20 09:11:10.137037] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.285 [2024-11-20 09:11:10.137048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.285 [2024-11-20 09:11:10.137089] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.285 [2024-11-20 09:11:10.137097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.285 [2024-11-20 09:11:10.137104] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.285 [2024-11-20 09:11:10.137110] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:26:54.285 [2024-11-20 09:11:10.137115] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.285 [2024-11-20 09:11:10.137118] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.285 [2024-11-20 09:11:10.146800] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.285 [2024-11-20 09:11:10.146814] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.285 [2024-11-20 09:11:10.146819] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.146822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.285 [2024-11-20 09:11:10.146836] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:54.285 [2024-11-20 09:11:10.147068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.285 [2024-11-20 09:11:10.147084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.285 [2024-11-20 09:11:10.147091] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.285 [2024-11-20 09:11:10.147105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.285 [2024-11-20 09:11:10.147121] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.285 [2024-11-20 09:11:10.147127] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.285 [2024-11-20 09:11:10.147134] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.285 [2024-11-20 09:11:10.147140] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.285 [2024-11-20 09:11:10.147144] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.285 [2024-11-20 09:11:10.147148] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.285 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:26:54.285 [2024-11-20 09:11:10.156866] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.285 [2024-11-20 09:11:10.156878] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.285 [2024-11-20 09:11:10.156883] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 
00:26:54.285 [2024-11-20 09:11:10.156886] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.285 [2024-11-20 09:11:10.156899] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:54.285 [2024-11-20 09:11:10.157149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.285 [2024-11-20 09:11:10.157161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.285 [2024-11-20 09:11:10.157168] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.285 [2024-11-20 09:11:10.157178] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.285 [2024-11-20 09:11:10.157195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.285 [2024-11-20 09:11:10.157202] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.285 [2024-11-20 09:11:10.157209] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.286 [2024-11-20 09:11:10.157215] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.286 [2024-11-20 09:11:10.157219] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.286 [2024-11-20 09:11:10.157226] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:54.286 [2024-11-20 09:11:10.166931] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:26:54.286 [2024-11-20 09:11:10.166944] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.286 [2024-11-20 09:11:10.166953] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.286 [2024-11-20 09:11:10.166957] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.286 [2024-11-20 09:11:10.166970] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:54.286 [2024-11-20 09:11:10.167160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.286 [2024-11-20 09:11:10.167172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.286 [2024-11-20 09:11:10.167180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.286 [2024-11-20 09:11:10.167190] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.286 [2024-11-20 09:11:10.167200] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.286 [2024-11-20 09:11:10.167206] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.286 [2024-11-20 09:11:10.167213] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:26:54.286 [2024-11-20 09:11:10.167219] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.286 [2024-11-20 09:11:10.167223] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:26:54.286 [2024-11-20 09:11:10.167227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:54.286 [2024-11-20 09:11:10.177001] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:26:54.286 [2024-11-20 09:11:10.177011] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:26:54.286 [2024-11-20 09:11:10.177015] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:26:54.286 [2024-11-20 09:11:10.177019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:26:54.286 [2024-11-20 09:11:10.177032] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:26:54.286 [2024-11-20 09:11:10.177296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:54.286 [2024-11-20 09:11:10.177307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a2a390 with addr=10.0.0.2, port=4420 00:26:54.286 [2024-11-20 09:11:10.177314] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a2a390 is same with the state(6) to be set 00:26:54.286 [2024-11-20 09:11:10.177324] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a2a390 (9): Bad file descriptor 00:26:54.286 [2024-11-20 09:11:10.177334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:26:54.286 [2024-11-20 09:11:10.177340] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:26:54.286 [2024-11-20 09:11:10.177347] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:26:54.286 [2024-11-20 09:11:10.177352] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:26:54.286 [2024-11-20 09:11:10.177359] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:26:54.286 [2024-11-20 09:11:10.177363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:26:54.286 [2024-11-20 09:11:10.182370] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:26:54.286 [2024-11-20 09:11:10.182386] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:26:54.286 09:11:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:26:54.286 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:26:54.287 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:26:54.287 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.287 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.287 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:26:54.287 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.559 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:26:54.560 09:11:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@919 -- # local max=10 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:54.560 09:11:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:55.496 [2024-11-20 09:11:11.490113] 
bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:26:55.496 [2024-11-20 09:11:11.490131] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:26:55.496 [2024-11-20 09:11:11.490144] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:26:55.755 [2024-11-20 09:11:11.576405] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:26:56.094 [2024-11-20 09:11:11.880684] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:26:56.094 [2024-11-20 09:11:11.881305] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x1a29950:1 started. 00:26:56.094 [2024-11-20 09:11:11.882923] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:26:56.094 [2024-11-20 09:11:11.882952] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local 
arg=rpc_cmd 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 [2024-11-20 09:11:11.889334] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x1a29950 was disconnected and freed. delete nvme_qpair. 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.094 request: 00:26:56.094 { 00:26:56.094 "name": "nvme", 00:26:56.094 "trtype": "tcp", 00:26:56.094 "traddr": "10.0.0.2", 00:26:56.094 "adrfam": "ipv4", 00:26:56.094 "trsvcid": "8009", 00:26:56.094 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:56.094 "wait_for_attach": true, 00:26:56.094 "method": "bdev_nvme_start_discovery", 00:26:56.094 "req_id": 1 00:26:56.094 } 00:26:56.094 Got JSON-RPC error response 00:26:56.094 response: 00:26:56.094 { 00:26:56.094 "code": -17, 00:26:56.094 "message": "File exists" 00:26:56.094 } 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
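The trace above wraps the duplicate `bdev_nvme_start_discovery` call in a `NOT` helper: the test step passes only when the wrapped command fails (here it is expected to fail with the -17 "File exists" JSON-RPC error, since a discovery service named `nvme` is already running). A minimal sketch of that inversion pattern, with `false`/`true` standing in for the failing and succeeding `rpc_cmd` invocations (the real helper in `autotest_common.sh` also validates the argument type first):

```shell
# Sketch of the NOT expected-failure wrapper seen in the trace.
# Returns 0 only when the wrapped command fails.
NOT() {
    "$@" && return 1
    return 0
}

# `false` stands in for the duplicate rpc_cmd call that must fail:
NOT false && echo "expected failure observed"
# `true` stands in for an unexpectedly succeeding call:
NOT true || echo "unexpected success caught"
```

This keeps negative tests symmetric with positive ones: both reduce to "did the step exit 0".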
00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # get_discovery_ctrlrs 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.094 
09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:56.094 09:11:11 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.094 request: 00:26:56.094 { 00:26:56.094 "name": "nvme_second", 00:26:56.094 "trtype": "tcp", 00:26:56.094 "traddr": "10.0.0.2", 00:26:56.094 "adrfam": "ipv4", 00:26:56.094 "trsvcid": "8009", 00:26:56.094 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:56.094 "wait_for_attach": true, 00:26:56.094 "method": "bdev_nvme_start_discovery", 
00:26:56.094 "req_id": 1 00:26:56.094 } 00:26:56.094 Got JSON-RPC error response 00:26:56.094 response: 00:26:56.094 { 00:26:56.094 "code": -17, 00:26:56.094 "message": "File exists" 00:26:56.094 } 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.094 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 
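The `get_bdev_list` checks in the trace pipe `rpc_cmd ... bdev_get_bdevs` through `jq -r '.[].name'`, `sort`, and `xargs` to get a canonical space-separated name string (here `nvme0n1 nvme0n2`) that can be compared with a single `[[ ... ]]`. A sketch of that normalization step, with a canned name list replacing the jq-over-RPC stage so it runs without an SPDK target:

```shell
# Sketch of the get_bdev_list pipeline from the trace: names are sorted and
# xargs joins them into one space-separated string for a single comparison.
# (Canned input stands in for `rpc_cmd bdev_get_bdevs | jq -r '.[].name'`.)
names=$(printf 'nvme0n2\nnvme0n1\n' | sort | xargs)
echo "$names"
[ "$names" = "nvme0n1 nvme0n2" ] && echo "bdev list matches"
```

Sorting makes the comparison independent of the order the RPC returns bdevs in.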
00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock 
bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:56.095 09:11:12 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:57.472 [2024-11-20 09:11:13.122397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:57.472 [2024-11-20 09:11:13.122424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a58940 with addr=10.0.0.2, port=8010 00:26:57.472 [2024-11-20 09:11:13.122435] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:57.472 [2024-11-20 09:11:13.122441] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:57.472 [2024-11-20 09:11:13.122447] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:58.408 [2024-11-20 09:11:14.124790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:58.408 [2024-11-20 09:11:14.124814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1a58940 with addr=10.0.0.2, port=8010 00:26:58.408 [2024-11-20 09:11:14.124826] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:58.408 [2024-11-20 09:11:14.124832] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:58.408 [2024-11-20 09:11:14.124837] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:26:59.346 [2024-11-20 09:11:15.127031] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:26:59.346 request: 00:26:59.346 { 00:26:59.346 "name": "nvme_second", 00:26:59.346 "trtype": "tcp", 00:26:59.346 "traddr": "10.0.0.2", 00:26:59.346 "adrfam": "ipv4", 00:26:59.346 "trsvcid": "8010", 
00:26:59.346 "hostnqn": "nqn.2021-12.io.spdk:test", 00:26:59.346 "wait_for_attach": false, 00:26:59.346 "attach_timeout_ms": 3000, 00:26:59.346 "method": "bdev_nvme_start_discovery", 00:26:59.346 "req_id": 1 00:26:59.346 } 00:26:59.346 Got JSON-RPC error response 00:26:59.346 response: 00:26:59.346 { 00:26:59.346 "code": -110, 00:26:59.346 "message": "Connection timed out" 00:26:59.346 } 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:26:59.346 09:11:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 2492881 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:59.346 rmmod nvme_tcp 00:26:59.346 rmmod nvme_fabrics 00:26:59.346 rmmod nvme_keyring 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 2492856 ']' 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 2492856 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 2492856 ']' 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 2492856 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
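The teardown above calls `killprocess 2492856`, which (per the trace) first verifies the pid is alive and checks the process name via `ps --no-headers -o comm=` before signalling. A simplified sketch of that guard, using a background `sleep` as the stand-in target (the real helper additionally refuses to kill `sudo` and escalates signals on timeout):

```shell
# Simplified sketch of the killprocess guard pattern: only signal a pid that
# is still alive, then reap it so no zombie is left behind.
killprocess() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 0   # already gone, nothing to do
    kill "$pid"
    wait "$pid" 2>/dev/null || true          # reap; ignore the signal status
    return 0
}

sleep 30 &            # stand-in for the nvmf target process
bgpid=$!
killprocess "$bgpid"
kill -0 "$bgpid" 2>/dev/null && echo alive || echo stopped
```

The `kill -0` probe makes teardown idempotent: rerunning it after the process has exited is a no-op rather than an error.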
00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2492856 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2492856' 00:26:59.346 killing process with pid 2492856 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 2492856 00:26:59.346 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 2492856 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@264 -- # local dev 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:59.607 09:11:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # return 0 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in 
"${dev_map[@]}" 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:01.512 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:27:01.513 
09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@284 -- # iptr 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-save 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:27:01.513 00:27:01.513 real 0m17.351s 00:27:01.513 user 0m20.729s 00:27:01.513 sys 0m5.770s 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:01.513 09:11:17 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:01.513 ************************************ 00:27:01.513 END TEST nvmf_host_discovery 00:27:01.513 ************************************ 00:27:01.772 09:11:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@34 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:01.772 09:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:01.772 09:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:01.772 09:11:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:01.772 ************************************ 00:27:01.772 START TEST nvmf_discovery_remove_ifc 00:27:01.772 ************************************ 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:01.773 * Looking for test storage... 
00:27:01.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:27:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.773 --rc genhtml_branch_coverage=1 00:27:01.773 --rc genhtml_function_coverage=1 00:27:01.773 --rc genhtml_legend=1 00:27:01.773 --rc geninfo_all_blocks=1 00:27:01.773 --rc geninfo_unexecuted_blocks=1 00:27:01.773 00:27:01.773 ' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.773 --rc genhtml_branch_coverage=1 00:27:01.773 --rc genhtml_function_coverage=1 00:27:01.773 --rc genhtml_legend=1 00:27:01.773 --rc geninfo_all_blocks=1 00:27:01.773 --rc geninfo_unexecuted_blocks=1 00:27:01.773 00:27:01.773 ' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.773 --rc genhtml_branch_coverage=1 00:27:01.773 --rc genhtml_function_coverage=1 00:27:01.773 --rc genhtml_legend=1 00:27:01.773 --rc geninfo_all_blocks=1 00:27:01.773 --rc geninfo_unexecuted_blocks=1 00:27:01.773 00:27:01.773 ' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:01.773 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:01.773 --rc genhtml_branch_coverage=1 00:27:01.773 --rc genhtml_function_coverage=1 00:27:01.773 --rc genhtml_legend=1 00:27:01.773 --rc geninfo_all_blocks=1 00:27:01.773 --rc geninfo_unexecuted_blocks=1 00:27:01.773 00:27:01.773 ' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:01.773 09:11:17 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:01.773 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:01.774 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
nvmftestinit 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:27:01.774 09:11:17 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:08.350 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:08.350 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.350 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:08.351 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp 
== tcp ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:08.351 Found net devices under 0000:86:00.0: cvl_0_0 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:08.351 Found net devices under 0000:86:00.1: cvl_0_1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:08.351 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:08.351 10.0.0.1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:08.351 10.0.0.2 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:08.351 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth 
]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:08.352 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:08.352 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:27:08.352 00:27:08.352 --- 10.0.0.1 ping statistics --- 00:27:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.352 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:08.352 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:08.352 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:08.352 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:27:08.352 00:27:08.352 --- 10.0.0.2 ping statistics --- 00:27:08.352 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:08.352 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local 
dev=initiator0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:08.352 
09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:08.352 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:08.353 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:08.353 09:11:23 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=2497977 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 2497977 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2497977 ']' 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:08.353 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.353 09:11:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.353 [2024-11-20 09:11:23.927149] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:27:08.353 [2024-11-20 09:11:23.927195] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:08.353 [2024-11-20 09:11:24.006480] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.353 [2024-11-20 09:11:24.047186] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:08.353 [2024-11-20 09:11:24.047222] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:08.353 [2024-11-20 09:11:24.047229] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:08.353 [2024-11-20 09:11:24.047235] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:08.353 [2024-11-20 09:11:24.047240] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:08.353 [2024-11-20 09:11:24.047775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.353 [2024-11-20 09:11:24.194850] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:08.353 [2024-11-20 09:11:24.203033] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:08.353 null0 00:27:08.353 [2024-11-20 09:11:24.235022] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=2498026 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 2498026 /tmp/host.sock 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 2498026 ']' 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:08.353 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:08.353 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.353 [2024-11-20 09:11:24.302674] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:27:08.353 [2024-11-20 09:11:24.302716] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2498026 ] 00:27:08.353 [2024-11-20 09:11:24.375302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:08.613 [2024-11-20 09:11:24.419464] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:08.613 09:11:24 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:08.613 09:11:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.550 [2024-11-20 09:11:25.561264] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:09.550 [2024-11-20 09:11:25.561283] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:09.550 [2024-11-20 09:11:25.561298] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:09.809 [2024-11-20 09:11:25.687696] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:09.809 [2024-11-20 09:11:25.782434] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:09.809 [2024-11-20 09:11:25.783184] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0xc949f0:1 started. 
00:27:09.810 [2024-11-20 09:11:25.784543] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:09.810 [2024-11-20 09:11:25.784584] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:09.810 [2024-11-20 09:11:25.784604] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:09.810 [2024-11-20 09:11:25.784617] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:09.810 [2024-11-20 09:11:25.784634] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:09.810 [2024-11-20 09:11:25.830491] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0xc949f0 was disconnected and freed. delete nvme_qpair. 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:27:09.810 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:10.069 09:11:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- 
# get_bdev_list 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:11.005 09:11:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.005 09:11:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:11.005 09:11:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:12.379 09:11:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:12.379 09:11:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:13.361 09:11:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:14.378 09:11:30 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:14.378 09:11:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:15.315 09:11:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # 
sleep 1 00:27:15.315 [2024-11-20 09:11:31.226172] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:15.315 [2024-11-20 09:11:31.226215] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.315 [2024-11-20 09:11:31.226242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.315 [2024-11-20 09:11:31.226253] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.315 [2024-11-20 09:11:31.226259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.315 [2024-11-20 09:11:31.226267] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.315 [2024-11-20 09:11:31.226274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.315 [2024-11-20 09:11:31.226282] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.315 [2024-11-20 09:11:31.226289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.315 [2024-11-20 09:11:31.226297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:15.315 [2024-11-20 09:11:31.226303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:15.315 [2024-11-20 09:11:31.226310] 
nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc71220 is same with the state(6) to be set 00:27:15.315 [2024-11-20 09:11:31.236195] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc71220 (9): Bad file descriptor 00:27:15.315 [2024-11-20 09:11:31.246227] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:15.315 [2024-11-20 09:11:31.246239] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:15.315 [2024-11-20 09:11:31.246244] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:15.315 [2024-11-20 09:11:31.246249] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:15.315 [2024-11-20 09:11:31.246271] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:16.250 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:16.250 [2024-11-20 09:11:32.272001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:16.250 [2024-11-20 09:11:32.272089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xc71220 with addr=10.0.0.2, port=4420 00:27:16.250 [2024-11-20 09:11:32.272122] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xc71220 is same with the state(6) to be set 00:27:16.250 [2024-11-20 09:11:32.272191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc71220 (9): Bad file descriptor 00:27:16.250 [2024-11-20 09:11:32.273155] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:16.250 [2024-11-20 09:11:32.273218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:16.250 [2024-11-20 09:11:32.273242] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:16.250 [2024-11-20 09:11:32.273265] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:16.250 [2024-11-20 09:11:32.273286] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:16.250 [2024-11-20 09:11:32.273302] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:16.250 [2024-11-20 09:11:32.273316] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:16.250 [2024-11-20 09:11:32.273339] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:16.250 [2024-11-20 09:11:32.273353] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:16.508 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:16.508 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:16.508 09:11:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:17.442 [2024-11-20 09:11:33.275875] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:17.442 [2024-11-20 09:11:33.275897] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:17.442 [2024-11-20 09:11:33.275909] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:17.442 [2024-11-20 09:11:33.275915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:17.442 [2024-11-20 09:11:33.275923] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:17.442 [2024-11-20 09:11:33.275929] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:17.442 [2024-11-20 09:11:33.275933] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:17.442 [2024-11-20 09:11:33.275938] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:17.442 [2024-11-20 09:11:33.275962] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:17.442 [2024-11-20 09:11:33.275984] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.442 [2024-11-20 09:11:33.275993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.442 [2024-11-20 09:11:33.276003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.442 [2024-11-20 09:11:33.276010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.442 [2024-11-20 09:11:33.276017] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:17.442 [2024-11-20 09:11:33.276024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.442 [2024-11-20 09:11:33.276031] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.442 [2024-11-20 09:11:33.276041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.442 [2024-11-20 09:11:33.276049] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:17.442 [2024-11-20 09:11:33.276055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:17.442 [2024-11-20 09:11:33.276062] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:17.442 [2024-11-20 09:11:33.276455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc60900 (9): Bad file descriptor 00:27:17.442 [2024-11-20 09:11:33.277467] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:17.442 [2024-11-20 09:11:33.277478] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:17.442 09:11:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:18.818 09:11:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:19.384 [2024-11-20 09:11:35.328097] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:19.384 [2024-11-20 09:11:35.328116] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:19.384 [2024-11-20 09:11:35.328130] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:19.384 [2024-11-20 09:11:35.416401] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:19.642 [2024-11-20 09:11:35.517157] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:19.642 [2024-11-20 09:11:35.517796] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0xc70080:1 started. 00:27:19.642 [2024-11-20 09:11:35.518858] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:19.642 [2024-11-20 09:11:35.518890] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:19.642 [2024-11-20 09:11:35.518906] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:19.642 [2024-11-20 09:11:35.518919] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:27:19.642 [2024-11-20 09:11:35.518927] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:19.642 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:19.642 [2024-11-20 09:11:35.525303] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0xc70080 was disconnected and freed. delete nvme_qpair. 
00:27:19.642 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 2498026 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2498026 ']' 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2498026 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2498026 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0 
00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2498026' 00:27:19.643 killing process with pid 2498026 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2498026 00:27:19.643 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2498026 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:19.901 rmmod nvme_tcp 00:27:19.901 rmmod nvme_fabrics 00:27:19.901 rmmod nvme_keyring 00:27:19.901 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 2497977 ']' 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 2497977 00:27:19.902 09:11:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 2497977 ']' 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 2497977 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2497977 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2497977' 00:27:19.902 killing process with pid 2497977 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 2497977 00:27:19.902 09:11:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 2497977 00:27:20.160 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:20.160 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini 00:27:20.160 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev 00:27:20.160 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:20.161 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:20.161 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 
00:27:20.161 09:11:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 
00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=() 00:27:22.068 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore 00:27:22.327 00:27:22.327 real 0m20.515s 00:27:22.327 user 0m24.644s 00:27:22.327 sys 0m5.842s 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:22.327 ************************************ 00:27:22.327 END TEST nvmf_discovery_remove_ifc 00:27:22.327 ************************************ 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@35 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:22.327 09:11:38 
nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:22.327 ************************************ 00:27:22.327 START TEST nvmf_multicontroller 00:27:22.327 ************************************ 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:27:22.327 * Looking for test storage... 00:27:22.327 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-: 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-: 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<' 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2 00:27:22.327 09:11:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # 
return 0 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.327 --rc genhtml_branch_coverage=1 00:27:22.327 --rc genhtml_function_coverage=1 00:27:22.327 --rc genhtml_legend=1 00:27:22.327 --rc geninfo_all_blocks=1 00:27:22.327 --rc geninfo_unexecuted_blocks=1 00:27:22.327 00:27:22.327 ' 00:27:22.327 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:22.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.327 --rc genhtml_branch_coverage=1 00:27:22.327 --rc genhtml_function_coverage=1 00:27:22.327 --rc genhtml_legend=1 00:27:22.327 --rc geninfo_all_blocks=1 00:27:22.327 --rc geninfo_unexecuted_blocks=1 00:27:22.327 00:27:22.328 ' 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.328 --rc genhtml_branch_coverage=1 00:27:22.328 --rc genhtml_function_coverage=1 00:27:22.328 --rc genhtml_legend=1 00:27:22.328 --rc geninfo_all_blocks=1 00:27:22.328 --rc geninfo_unexecuted_blocks=1 00:27:22.328 00:27:22.328 ' 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:22.328 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:22.328 --rc genhtml_branch_coverage=1 00:27:22.328 --rc genhtml_function_coverage=1 00:27:22.328 --rc genhtml_legend=1 00:27:22.328 --rc geninfo_all_blocks=1 00:27:22.328 --rc geninfo_unexecuted_blocks=1 00:27:22.328 00:27:22.328 ' 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:22.328 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0 
00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:22.587 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:22.587 09:11:38 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:22.587 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:22.588 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:22.588 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:22.588 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable 00:27:22.588 09:11:38 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.165 09:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:29.165 09:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=() 00:27:29.165 09:11:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # 
local -A pci_drivers 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=() 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:29.165 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:29.165 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:29.165 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:29.165 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.166 
09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:29.166 Found net devices under 0000:86:00.0: cvl_0_0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:29.166 Found net devices under 0000:86:00.1: cvl_0_1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # create_target_ns 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28 
-- # local -g _dev 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:29.166 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:29.166 10.0.0.1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:29.166 10.0.0.2 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:29.166 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:29.167 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:29.167 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:27:29.167 00:27:29.167 --- 10.0.0.1 ping statistics --- 00:27:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.167 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:29.167 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:29.167 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:29.167 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.159 ms 00:27:29.167 00:27:29.167 --- 10.0.0.2 ping statistics --- 00:27:29.167 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:29.167 rtt min/avg/max/mdev = 0.159/0.159/0.159/0.000 ms 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.167 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:29.167 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:29.168 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=2503503 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 2503503 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2503503 ']' 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:29.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 [2024-11-20 09:11:44.483645] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:27:29.168 [2024-11-20 09:11:44.483699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:29.168 [2024-11-20 09:11:44.563494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:29.168 [2024-11-20 09:11:44.605934] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:29.168 [2024-11-20 09:11:44.605977] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:29.168 [2024-11-20 09:11:44.605985] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:29.168 [2024-11-20 09:11:44.605991] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:29.168 [2024-11-20 09:11:44.605995] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:29.168 [2024-11-20 09:11:44.607451] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:29.168 [2024-11-20 09:11:44.607559] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.168 [2024-11-20 09:11:44.607560] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 [2024-11-20 09:11:44.738835] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:27:29.168 Malloc0 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 [2024-11-20 09:11:44.804176] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:27:29.168 
09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 [2024-11-20 09:11:44.812098] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 Malloc1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=2503703 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 2503703 /var/tmp/bdevperf.sock 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 2503703 ']' 00:27:29.168 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:27:29.169 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:29.169 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:27:29.169 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:27:29.169 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:29.169 09:11:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.169 NVMe0n1 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.169 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.428 09:11:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.428 1 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.428 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.428 request: 00:27:29.428 { 00:27:29.428 "name": "NVMe0", 00:27:29.428 "trtype": "tcp", 00:27:29.428 "traddr": "10.0.0.2", 00:27:29.428 "adrfam": "ipv4", 00:27:29.428 "trsvcid": "4420", 00:27:29.428 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:27:29.428 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:27:29.428 "hostaddr": "10.0.0.1", 00:27:29.428 "prchk_reftag": false, 00:27:29.428 "prchk_guard": false, 00:27:29.428 "hdgst": false, 00:27:29.428 "ddgst": false, 00:27:29.428 "allow_unrecognized_csi": false, 00:27:29.428 "method": "bdev_nvme_attach_controller", 00:27:29.428 "req_id": 1 00:27:29.428 } 00:27:29.429 Got JSON-RPC error response 00:27:29.429 response: 00:27:29.429 { 00:27:29.429 "code": -114, 00:27:29.429 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:29.429 } 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.429 request: 00:27:29.429 { 00:27:29.429 "name": "NVMe0", 00:27:29.429 "trtype": "tcp", 00:27:29.429 "traddr": "10.0.0.2", 00:27:29.429 "adrfam": "ipv4", 00:27:29.429 "trsvcid": "4420", 00:27:29.429 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:27:29.429 "hostaddr": "10.0.0.1", 00:27:29.429 "prchk_reftag": false, 00:27:29.429 "prchk_guard": false, 00:27:29.429 "hdgst": false, 00:27:29.429 "ddgst": false, 00:27:29.429 "allow_unrecognized_csi": false, 00:27:29.429 "method": "bdev_nvme_attach_controller", 00:27:29.429 "req_id": 1 00:27:29.429 } 00:27:29.429 Got JSON-RPC error response 00:27:29.429 response: 00:27:29.429 { 00:27:29.429 "code": -114, 00:27:29.429 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:29.429 } 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.429 request: 00:27:29.429 { 00:27:29.429 "name": "NVMe0", 00:27:29.429 "trtype": "tcp", 00:27:29.429 "traddr": "10.0.0.2", 00:27:29.429 "adrfam": "ipv4", 00:27:29.429 "trsvcid": "4420", 00:27:29.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.429 
"hostaddr": "10.0.0.1", 00:27:29.429 "prchk_reftag": false, 00:27:29.429 "prchk_guard": false, 00:27:29.429 "hdgst": false, 00:27:29.429 "ddgst": false, 00:27:29.429 "multipath": "disable", 00:27:29.429 "allow_unrecognized_csi": false, 00:27:29.429 "method": "bdev_nvme_attach_controller", 00:27:29.429 "req_id": 1 00:27:29.429 } 00:27:29.429 Got JSON-RPC error response 00:27:29.429 response: 00:27:29.429 { 00:27:29.429 "code": -114, 00:27:29.429 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:27:29.429 } 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.429 request: 00:27:29.429 { 00:27:29.429 "name": "NVMe0", 00:27:29.429 "trtype": "tcp", 00:27:29.429 "traddr": "10.0.0.2", 00:27:29.429 "adrfam": "ipv4", 00:27:29.429 "trsvcid": "4420", 00:27:29.429 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:27:29.429 "hostaddr": "10.0.0.1", 00:27:29.429 "prchk_reftag": false, 00:27:29.429 "prchk_guard": false, 00:27:29.429 "hdgst": false, 00:27:29.429 "ddgst": false, 00:27:29.429 "multipath": "failover", 00:27:29.429 "allow_unrecognized_csi": false, 00:27:29.429 "method": "bdev_nvme_attach_controller", 00:27:29.429 "req_id": 1 00:27:29.429 } 00:27:29.429 Got JSON-RPC error response 00:27:29.429 response: 00:27:29.429 { 00:27:29.429 "code": -114, 00:27:29.429 "message": "A controller named NVMe0 already exists with the specified network path" 00:27:29.429 } 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.429 
09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.429 NVMe0n1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.429 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.688 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:27:29.688 09:11:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:27:30.624 { 00:27:30.624 "results": [ 00:27:30.624 { 00:27:30.624 "job": "NVMe0n1", 00:27:30.624 "core_mask": "0x1", 00:27:30.624 "workload": "write", 00:27:30.624 "status": "finished", 00:27:30.624 "queue_depth": 128, 00:27:30.624 "io_size": 4096, 00:27:30.624 "runtime": 1.004449, 00:27:30.624 "iops": 24234.18212373152, 00:27:30.624 "mibps": 94.66477392082625, 00:27:30.624 "io_failed": 0, 00:27:30.624 "io_timeout": 0, 00:27:30.624 "avg_latency_us": 5275.373443788336, 00:27:30.624 "min_latency_us": 1481.6834782608696, 00:27:30.624 "max_latency_us": 9175.04 00:27:30.624 } 00:27:30.624 ], 00:27:30.624 "core_count": 1 00:27:30.624 } 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.624 09:11:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n '' ]] 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 2503703 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2503703 ']' 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2503703 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:30.624 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503703 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503703' 00:27:30.883 killing process with pid 2503703 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2503703 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2503703 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.883 09:11:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:27:30.883 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:30.883 [2024-11-20 09:11:44.916709] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:27:30.883 [2024-11-20 09:11:44.916763] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2503703 ] 00:27:30.883 [2024-11-20 09:11:44.989917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:30.883 [2024-11-20 09:11:45.032648] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:30.883 [2024-11-20 09:11:45.488069] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name ec452695-9b98-4e0f-9344-594b442320d7 already exists 00:27:30.883 [2024-11-20 09:11:45.488097] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:ec452695-9b98-4e0f-9344-594b442320d7 alias for bdev NVMe1n1 00:27:30.883 [2024-11-20 09:11:45.488105] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:27:30.883 Running I/O for 1 seconds... 00:27:30.883 24214.00 IOPS, 94.59 MiB/s 00:27:30.883 Latency(us) 00:27:30.883 [2024-11-20T08:11:46.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.883 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:27:30.883 NVMe0n1 : 1.00 24234.18 94.66 0.00 0.00 5275.37 1481.68 9175.04 00:27:30.883 [2024-11-20T08:11:46.924Z] =================================================================================================================== 00:27:30.883 [2024-11-20T08:11:46.924Z] Total : 24234.18 94.66 0.00 0.00 5275.37 1481.68 9175.04 00:27:30.883 Received shutdown signal, test time was about 1.000000 seconds 00:27:30.883 00:27:30.883 Latency(us) 00:27:30.883 [2024-11-20T08:11:46.924Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:30.883 [2024-11-20T08:11:46.924Z] =================================================================================================================== 00:27:30.883 [2024-11-20T08:11:46.924Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:27:30.883 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:30.883 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:30.883 rmmod nvme_tcp 00:27:30.883 rmmod nvme_fabrics 00:27:30.883 rmmod nvme_keyring 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 2503503 ']' 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 2503503 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 2503503 ']' 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 2503503 
00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:31.142 09:11:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2503503 00:27:31.142 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:31.142 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:31.142 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2503503' 00:27:31.142 killing process with pid 2503503 00:27:31.142 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 2503503 00:27:31.142 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 2503503 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@264 -- # local dev 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:31.401 09:11:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # return 0 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:33.318 09:11:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:27:33.318 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@284 -- # iptr 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-save 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-restore 00:27:33.319 00:27:33.319 real 0m11.094s 00:27:33.319 user 0m11.652s 00:27:33.319 sys 0m5.233s 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:27:33.319 ************************************ 00:27:33.319 END TEST nvmf_multicontroller 00:27:33.319 ************************************ 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@38 -- # [[ tcp == \r\d\m\a ]] 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@42 -- # [[ 0 -eq 1 ]] 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # [[ 0 -eq 1 ]] 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@53 -- # trap - SIGINT SIGTERM EXIT 00:27:33.319 00:27:33.319 real 6m14.768s 00:27:33.319 user 11m3.181s 00:27:33.319 sys 2m4.834s 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:33.319 09:11:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:33.319 ************************************ 00:27:33.319 END TEST nvmf_host 00:27:33.319 ************************************ 00:27:33.319 09:11:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 
00:27:33.319 09:11:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ 0 -eq 0 ]] 00:27:33.319 09:11:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:33.319 09:11:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:33.319 09:11:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.319 09:11:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:33.585 ************************************ 00:27:33.585 START TEST nvmf_target_core_interrupt_mode 00:27:33.585 ************************************ 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:27:33.585 * Looking for test storage... 00:27:33.585 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.585 09:11:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.585 --rc genhtml_branch_coverage=1 00:27:33.585 --rc genhtml_function_coverage=1 00:27:33.585 --rc genhtml_legend=1 00:27:33.585 --rc geninfo_all_blocks=1 00:27:33.585 --rc geninfo_unexecuted_blocks=1 00:27:33.585 
00:27:33.585 ' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.585 --rc genhtml_branch_coverage=1 00:27:33.585 --rc genhtml_function_coverage=1 00:27:33.585 --rc genhtml_legend=1 00:27:33.585 --rc geninfo_all_blocks=1 00:27:33.585 --rc geninfo_unexecuted_blocks=1 00:27:33.585 00:27:33.585 ' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.585 --rc genhtml_branch_coverage=1 00:27:33.585 --rc genhtml_function_coverage=1 00:27:33.585 --rc genhtml_legend=1 00:27:33.585 --rc geninfo_all_blocks=1 00:27:33.585 --rc geninfo_unexecuted_blocks=1 00:27:33.585 00:27:33.585 ' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:33.585 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.585 --rc genhtml_branch_coverage=1 00:27:33.585 --rc genhtml_function_coverage=1 00:27:33.585 --rc genhtml_legend=1 00:27:33.585 --rc geninfo_all_blocks=1 00:27:33.585 --rc geninfo_unexecuted_blocks=1 00:27:33.585 00:27:33.585 ' 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.585 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:33.586 09:11:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:33.586 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:33.846 ************************************ 00:27:33.847 START TEST nvmf_abort 00:27:33.847 ************************************ 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:27:33.847 * Looking for test storage... 
00:27:33.847 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:27:33.847 09:11:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:33.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.847 --rc genhtml_branch_coverage=1 00:27:33.847 --rc genhtml_function_coverage=1 00:27:33.847 --rc genhtml_legend=1 00:27:33.847 --rc geninfo_all_blocks=1 00:27:33.847 --rc geninfo_unexecuted_blocks=1 00:27:33.847 00:27:33.847 ' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:33.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.847 --rc genhtml_branch_coverage=1 00:27:33.847 --rc genhtml_function_coverage=1 00:27:33.847 --rc genhtml_legend=1 00:27:33.847 --rc geninfo_all_blocks=1 00:27:33.847 --rc geninfo_unexecuted_blocks=1 00:27:33.847 00:27:33.847 ' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:33.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.847 --rc genhtml_branch_coverage=1 00:27:33.847 --rc genhtml_function_coverage=1 00:27:33.847 --rc genhtml_legend=1 00:27:33.847 --rc geninfo_all_blocks=1 00:27:33.847 --rc geninfo_unexecuted_blocks=1 00:27:33.847 00:27:33.847 ' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:33.847 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:33.847 --rc genhtml_branch_coverage=1 00:27:33.847 --rc genhtml_function_coverage=1 00:27:33.847 --rc genhtml_legend=1 00:27:33.847 --rc geninfo_all_blocks=1 00:27:33.847 --rc geninfo_unexecuted_blocks=1 00:27:33.847 00:27:33.847 ' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:33.847 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:33.848 
09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:27:33.848 09:11:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:40.426 
09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:40.426 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:40.426 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:40.426 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:40.426 Found net devices under 0000:86:00.0: cvl_0_0 00:27:40.426 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:40.426 Found net devices under 0000:86:00.1: cvl_0_1 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:40.426 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:40.426 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:40.427 10.0.0.1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:40.427 10.0.0.2 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy 
== veth ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:40.427 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:40.427 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:40.427 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.358 ms 00:27:40.427 00:27:40.427 --- 10.0.0.1 ping statistics --- 00:27:40.427 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.427 rtt min/avg/max/mdev = 0.358/0.358/0.358/0.000 ms 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:40.428 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:40.428 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:40.428 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.191 ms 00:27:40.428 00:27:40.428 --- 10.0.0.2 ping statistics --- 00:27:40.428 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:40.428 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:40.428 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:27:40.428 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:40.428 09:11:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:27:40.428 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=2507556 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 2507556 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 2507556 ']' 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:40.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:40.429 09:11:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 [2024-11-20 09:11:55.941807] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:40.429 [2024-11-20 09:11:55.942743] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:27:40.429 [2024-11-20 09:11:55.942775] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:40.429 [2024-11-20 09:11:56.023376] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:40.429 [2024-11-20 09:11:56.065263] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:40.429 [2024-11-20 09:11:56.065299] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:40.429 [2024-11-20 09:11:56.065306] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:40.429 [2024-11-20 09:11:56.065312] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:40.429 [2024-11-20 09:11:56.065318] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:40.429 [2024-11-20 09:11:56.066676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:40.429 [2024-11-20 09:11:56.066783] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.429 [2024-11-20 09:11:56.066784] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:40.429 [2024-11-20 09:11:56.133261] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:40.429 [2024-11-20 09:11:56.134101] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:40.429 [2024-11-20 09:11:56.134265] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:27:40.429 [2024-11-20 09:11:56.134428] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 [2024-11-20 09:11:56.199567] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:27:40.429 Malloc0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 Delay0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 [2024-11-20 09:11:56.287534] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:40.429 09:11:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:27:40.429 [2024-11-20 09:11:56.415844] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:27:42.965 Initializing NVMe Controllers 00:27:42.965 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:27:42.965 controller IO queue size 128 less than required 00:27:42.965 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:27:42.965 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:27:42.965 Initialization complete. Launching workers. 
00:27:42.965 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 36828 00:27:42.965 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 36885, failed to submit 66 00:27:42.965 success 36828, unsuccessful 57, failed 0 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:42.965 rmmod nvme_tcp 00:27:42.965 rmmod nvme_fabrics 00:27:42.965 rmmod nvme_keyring 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:42.965 09:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 2507556 ']' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 2507556 ']' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2507556' 00:27:42.965 killing process with pid 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 2507556 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:42.965 09:11:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:42.965 09:11:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:44.871 09:12:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:27:44.871 00:27:44.871 real 0m11.199s 00:27:44.871 user 0m10.253s 00:27:44.871 sys 0m5.738s 
00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:27:44.871 ************************************ 00:27:44.871 END TEST nvmf_abort 00:27:44.871 ************************************ 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:27:44.871 ************************************ 00:27:44.871 START TEST nvmf_ns_hotplug_stress 00:27:44.871 ************************************ 00:27:44.871 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:27:45.132 * Looking for test storage... 
00:27:45.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:45.132 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:45.132 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:27:45.132 09:12:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:27:45.132 09:12:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:27:45.132 09:12:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:45.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.132 --rc genhtml_branch_coverage=1 00:27:45.132 --rc genhtml_function_coverage=1 00:27:45.132 --rc genhtml_legend=1 00:27:45.132 --rc geninfo_all_blocks=1 00:27:45.132 --rc geninfo_unexecuted_blocks=1 00:27:45.132 00:27:45.132 ' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:45.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.132 --rc genhtml_branch_coverage=1 00:27:45.132 --rc genhtml_function_coverage=1 00:27:45.132 --rc genhtml_legend=1 00:27:45.132 --rc geninfo_all_blocks=1 00:27:45.132 --rc geninfo_unexecuted_blocks=1 00:27:45.132 00:27:45.132 ' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:45.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.132 --rc genhtml_branch_coverage=1 00:27:45.132 --rc genhtml_function_coverage=1 00:27:45.132 --rc genhtml_legend=1 00:27:45.132 --rc geninfo_all_blocks=1 00:27:45.132 --rc geninfo_unexecuted_blocks=1 00:27:45.132 00:27:45.132 ' 00:27:45.132 09:12:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:45.132 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:45.132 --rc genhtml_branch_coverage=1 00:27:45.132 --rc genhtml_function_coverage=1 00:27:45.132 --rc genhtml_legend=1 00:27:45.132 --rc geninfo_all_blocks=1 00:27:45.132 --rc geninfo_unexecuted_blocks=1 00:27:45.132 00:27:45.132 ' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:45.132 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.133 
09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:45.133 09:12:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:27:45.133 09:12:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:27:45.133 09:12:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:27:51.769 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:51.770 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:51.770 09:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:51.770 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:51.770 09:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:51.770 Found net devices under 0000:86:00.0: cvl_0_0 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:51.770 Found net devices under 0000:86:00.1: cvl_0_1 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # 
local ns=nvmf_ns_spdk 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:27:51.770 09:12:06 
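The namespace bootstrap traced above can be condensed into a short sketch (an assumption, simplified from nvmf/setup.sh — not the script verbatim): the harness records an `ip netns exec` prefix in an array so every later target-side command runs inside the namespace.

```shell
# Minimal sketch (assumption: simplified from nvmf/setup.sh@142-148) of
# the target-namespace bootstrap.  The prefix array lets later code run
# arbitrary commands inside the namespace via "${NVMF_TARGET_NS_CMD[@]}".
ns=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$ns")
# ip netns add "$ns"                            # needs root; shown for context
# "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up  # bring up loopback in the ns
echo "${NVMF_TARGET_NS_CMD[@]}"
```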
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:51.770 09:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:51.770 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 
dev cvl_0_0' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:51.771 10.0.0.1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:51.771 09:12:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:51.771 10.0.0.2 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:06 
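The `val_to_ip` helper seen at setup.sh@11-13 explains where 10.0.0.1 and 10.0.0.2 come from: the IP pool is carried as a 32-bit integer (0x0a000001) and unpacked into dotted-quad form. A standalone sketch, reconstructed from the trace:

```shell
# val_to_ip (reconstructed sketch): unpack a 32-bit integer into dotted
# quad, so 167772161 (0x0A000001) becomes 10.0.0.1 and the next pool
# value, 167772162, becomes 10.0.0.2.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff ))  $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```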
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:51.771 09:12:06 
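The `ipts` call at common.sh@541 tags the firewall rule it installs with a `SPDK_NVMF:` comment. A hedged sketch of that behaviour (printed rather than executed, since iptables needs root): the comment embeds the rule's own arguments so teardown can later find and delete exactly the rules SPDK added.

```shell
# Sketch (assumption) of the ipts wrapper seen at common.sh@541: re-issue
# the rule through iptables with a SPDK_NVMF: comment for later cleanup.
rule='-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
# shellcheck disable=SC2086  # word-splitting of $rule is intentional here
echo iptables $rule -m comment --comment "SPDK_NVMF:$rule"
```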
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:51.771 09:12:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:51.771 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:51.771 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:27:51.771 00:27:51.771 --- 10.0.0.1 ping statistics --- 00:27:51.771 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.771 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:51.771 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:51.771 09:12:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:51.772 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:51.772 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:27:51.772 00:27:51.772 --- 10.0.0.2 ping statistics --- 00:27:51.772 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:51.772 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:51.772 09:12:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:51.772 09:12:07 
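The empty results for `initiator1`/`target1` fall out of the `dev_map` lookup in `get_net_dev` (setup.sh@107-110): only one interface pair was set up, so only the index-0 keys exist. A condensed sketch of that lookup (an assumption, not the exact function body):

```shell
# Sketch (assumption: condensed from get_net_dev at setup.sh@107-110).
# With a single pair configured, dev_map holds only index-0 entries, so
# lookups for initiator1/target1 return nothing and the caller leaves
# NVMF_SECOND_INITIATOR_IP / NVMF_SECOND_TARGET_IP empty.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
get_net_dev() {
  [[ -n ${dev_map[$1]} ]] && echo "${dev_map[$1]}"
}
get_net_dev initiator0   # cvl_0_0
```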
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=2511717 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 2511717 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:27:51.772 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 2511717 ']' 00:27:51.772 09:12:07 
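The transport options visible at common.sh@308-312 are built up incrementally: start with the transport flag, then append the extra flag the harness adds only for tcp. A small sketch of that assembly (assumption: reduced from common.sh; the meaning of the extra `-o` flag is not shown in this trace):

```shell
# Sketch (assumption) of NVMF_TRANSPORT_OPTS assembly at common.sh@308-312.
TEST_TRANSPORT=tcp
NVMF_TRANSPORT_OPTS="-t $TEST_TRANSPORT"
if [[ $TEST_TRANSPORT == tcp ]]; then
  NVMF_TRANSPORT_OPTS+=" -o"   # tcp-only extra option seen in the trace
fi
echo "$NVMF_TRANSPORT_OPTS"   # -t tcp -o
```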
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:51.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:51.773 [2024-11-20 09:12:07.205403] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:27:51.773 [2024-11-20 09:12:07.206404] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:27:51.773 [2024-11-20 09:12:07.206444] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:51.773 [2024-11-20 09:12:07.287484] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:51.773 [2024-11-20 09:12:07.327936] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:51.773 [2024-11-20 09:12:07.327978] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:51.773 [2024-11-20 09:12:07.327986] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:51.773 [2024-11-20 09:12:07.327992] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:51.773 [2024-11-20 09:12:07.327997] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:51.773 [2024-11-20 09:12:07.329320] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:27:51.773 [2024-11-20 09:12:07.329355] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:51.773 [2024-11-20 09:12:07.329356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:27:51.773 [2024-11-20 09:12:07.397163] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:27:51.773 [2024-11-20 09:12:07.397977] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:27:51.773 [2024-11-20 09:12:07.398344] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:27:51.773 [2024-11-20 09:12:07.398437] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:27:51.773 [2024-11-20 09:12:07.642291] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:51.773 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:27:52.032 09:12:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:52.032 [2024-11-20 09:12:08.062584] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:27:52.291 09:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:52.291 09:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:27:52.550 Malloc0 00:27:52.550 09:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:27:52.809 Delay0 00:27:52.809 09:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:53.067 09:12:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:27:53.067 NULL1 00:27:53.326 09:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:27:53.326 09:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2512372 00:27:53.326 09:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 
512 -Q 1000 00:27:53.326 09:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:53.326 09:12:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:54.703 Read completed with error (sct=0, sc=11) 00:27:54.703 09:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.703 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:54.961 09:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:27:54.961 09:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:27:54.961 true 00:27:54.961 09:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:54.961 09:12:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:55.896 09:12:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.155 09:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:27:56.155 09:12:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:27:56.155 true 00:27:56.155 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:56.155 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:56.412 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:56.670 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:27:56.670 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:27:56.929 true 00:27:56.929 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:56.929 09:12:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
00:27:57.864 09:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.122 09:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:27:58.122 09:12:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:27:58.122 true 00:27:58.380 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:58.380 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:27:58.380 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:58.639 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:27:58.639 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:27:58.897 true 00:27:58.897 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:27:58.897 09:12:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:27:59.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.833 09:12:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:27:59.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:27:59.833 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.091 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:00.091 09:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:00.091 09:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:00.348 true 00:28:00.348 09:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:00.348 09:12:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.283 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:01.283 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # 
null_size=1007 00:28:01.283 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:01.541 true 00:28:01.541 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:01.541 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:01.800 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:02.058 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:02.058 09:12:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:02.058 true 00:28:02.058 09:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:02.058 09:12:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.435 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:03.435 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.435 09:12:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:03.435 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:03.694 true 00:28:03.694 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:03.694 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:03.694 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:03.953 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:03.953 09:12:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:04.212 true 00:28:04.212 09:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:04.212 09:12:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:05.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.149 09:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:05.149 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.408 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:05.408 09:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:05.408 09:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:05.666 true 00:28:05.666 09:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:05.666 09:12:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:06.602 09:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:06.602 09:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:06.602 09:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:06.860 true 00:28:06.860 09:12:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:06.860 09:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:07.119 09:12:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:07.119 09:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:28:07.119 09:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:07.377 true 00:28:07.377 09:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:07.377 09:12:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 09:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 Message suppressed 999 times: Read 
completed with error (sct=0, sc=11) 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:08.756 09:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:08.756 09:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:09.014 true 00:28:09.014 09:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:09.014 09:12:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:09.951 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:09.951 09:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:09.951 09:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:09.951 09:12:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:10.208 true 00:28:10.208 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:10.208 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:10.208 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:10.467 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:10.467 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:10.725 true 00:28:10.725 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:10.725 09:12:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 09:12:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:12.100 09:12:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:12.100 09:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:12.358 true 00:28:12.358 09:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:12.358 09:12:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.295 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:13.295 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.295 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:13.295 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:13.553 true 00:28:13.553 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:13.553 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:13.812 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:13.812 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:13.812 09:12:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:14.071 true 00:28:14.071 09:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:14.071 09:12:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 09:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:15.445 09:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:15.445 09:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:15.704 true 00:28:15.704 09:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:15.704 09:12:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:16.639 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:16.639 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:16.639 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:16.639 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:16.897 true 00:28:16.897 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:16.897 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:17.157 09:12:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:17.157 09:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:17.157 09:12:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:17.415 true 00:28:17.415 09:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:17.415 09:12:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 09:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:18.791 09:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:18.791 09:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:19.049 true 00:28:19.049 09:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:19.049 09:12:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:19.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:19.985 09:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:19.985 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:19.985 09:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:19.985 09:12:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:20.243 true 00:28:20.243 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:20.244 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:20.502 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:20.760 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:20.760 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:20.760 true 00:28:20.760 09:12:36 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:20.760 09:12:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 09:12:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:22.133 09:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:22.133 09:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:22.391 true 00:28:22.391 09:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372 00:28:22.391 09:12:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:23.326 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:23.584 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:28:23.584 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:28:23.584 true
00:28:23.584 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372
00:28:23.584 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:23.584 Initializing NVMe Controllers
00:28:23.584 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:23.584 Controller IO queue size 128, less than required.
00:28:23.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:23.584 Controller IO queue size 128, less than required.
00:28:23.584 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:23.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:28:23.584 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:28:23.584 Initialization complete. Launching workers.
00:28:23.584 ========================================================
00:28:23.584 Latency(us)
00:28:23.584 Device Information : IOPS MiB/s Average min max
00:28:23.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2036.31 0.99 43649.61 2673.22 1020783.38
00:28:23.584 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 17821.95 8.70 7186.29 1612.24 460522.57
00:28:23.584 ========================================================
00:28:23.584 Total : 19858.25 9.70 10925.32 1612.24 1020783.38
00:28:23.584
00:28:23.843 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:24.101 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:28:24.101 09:12:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:28:24.358 true
00:28:24.358 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2512372
00:28:24.359 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2512372) - No such process
00:28:24.359 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2512372
00:28:24.359 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:28:24.359 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # 
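The trace above is the tail of the script's single-namespace stress loop (the `sh@44`–`sh@50` markers): while the background perf process (PID 2512372) is still alive, the script removes namespace 1, re-adds the `Delay0` bdev as a namespace, bumps `null_size`, and resizes the `NULL1` bdev; once `kill -0` reports "No such process" the loop ends. A minimal sketch of that loop, reconstructed from the trace with a stubbed `rpc` function standing in for SPDK's `scripts/rpc.py` and a stubbed `perf_alive` standing in for `kill -0 $perf_pid` (both stubs are assumptions for illustration, not SPDK APIs):

```shell
#!/usr/bin/env bash
# Sketch of the ns_hotplug_stress resize loop inferred from the xtrace output.
set -eu

iters=0
rpc() { echo "rpc $*"; }            # stub: the real script invokes scripts/rpc.py
perf_alive() { (( iters < 3 )); }   # stub: the real script runs `kill -0 $perf_pid`

null_size=1018
while perf_alive; do
    # sh@45/sh@46: hot-remove and hot-add the namespace under I/O
    rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
    rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
    # sh@49/sh@50: grow the backing null bdev by one unit each pass
    (( ++null_size ))
    rpc bdev_null_resize NULL1 "$null_size"
    (( ++iters ))
done
echo "final null_size=$null_size"   # prints "final null_size=1021" with the 3-pass stub
```

The stub runs three passes (mirroring the 1019, 1020, 1021 resizes in the log); the real loop only stops when the perf process exits.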
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:24.617 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:28:24.617 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:28:24.617 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:28:24.617 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:24.617 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:28:24.874 null0 00:28:24.874 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:24.874 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:24.874 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:28:25.177 null1 00:28:25.177 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:25.177 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:25.177 09:12:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:28:25.177 null2 00:28:25.461 09:12:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:25.461 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:25.461 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:28:25.461 null3 00:28:25.461 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:25.461 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:25.461 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:28:25.744 null4 00:28:25.744 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:25.744 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:25.744 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:28:25.744 null5 00:28:25.744 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:25.744 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:25.745 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:28:26.003 null6 00:28:26.003 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:26.003 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:26.003 09:12:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:28:26.261 null7 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.261 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
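The interleaved `sh@58`–`sh@64` and `sh@14`–`sh@18` entries above come from the script's parallel phase: after creating eight null bdevs (`null0`..`null7`, each 100 MiB with 4096-byte blocks), it launches eight background `add_remove` workers, one per namespace ID, and collects their PIDs for the later `wait`. A hedged sketch of that phase, again with a no-op `rpc` stub in place of `scripts/rpc.py` (the stub is an assumption; the marker comments map lines back to the trace):

```shell
#!/usr/bin/env bash
set -u

rpc() { :; }   # stub for scripts/rpc.py

# sh@14-18: add and remove one namespace ten times
add_remove() {
    local nsid=$1 bdev=$2
    for ((i = 0; i < 10; i++)); do
        rpc nvmf_subsystem_add_ns -n "$nsid" nqn.2016-06.io.spdk:cnode1 "$bdev"
        rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 "$nsid"
    done
}

nthreads=8
pids=()
# sh@59-60: one null bdev per worker (100 MiB, 4 KiB block size)
for ((i = 0; i < nthreads; i++)); do
    rpc bdev_null_create "null$i" 100 4096
done
# sh@62-64: run the workers concurrently, remembering each PID
for ((i = 0; i < nthreads; i++)); do
    add_remove $((i + 1)) "null$i" &
    pids+=($!)
done
wait "${pids[@]}"   # sh@66: wait on all eight worker PIDs
echo "workers=${#pids[@]}"
```

Because the eight workers run concurrently against the same subsystem, their xtrace lines interleave arbitrarily, which is why the `(( ++i ))` and `add_ns`/`remove_ns` entries in the log appear out of order.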
00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2517824 2517827 2517830 2517833 2517836 2517839 2517842 2517845 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.262 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:26.521 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:26.780 09:12:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:26.780 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.038 09:12:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.038 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:27.296 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:27.554 09:12:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.554 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:27.812 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:28.070 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.070 09:12:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.070 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:28.070 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.070 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.070 09:12:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:28.070 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.328 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:28.586 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:28.844 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:28.845 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.135 09:12:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:29.135 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:29.135 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:29.135 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:29.135 09:12:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:29.135 09:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.135 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:29.393 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 
null6 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.652 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:29.910 09:12:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:28:30.169 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:28:30.427 09:12:46 
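The xtrace output above records the hotplug stress loop from target/ns_hotplug_stress.sh (the `@16`–`@18` markers): ten iterations that attach namespaces 1..8 (backed by null0..null7 bdevs) to cnode1 in varying order, then detach them. A minimal, hypothetical reconstruction of that loop is sketched below; `RPC="echo"` stands in for scripts/rpc.py so the sketch runs without a live SPDK target, and the `shuf`-based ordering only approximates the interleaving seen in the real log.

```shell
#!/usr/bin/env bash
# Hypothetical sketch of the ns_hotplug_stress loop traced in the log.
# RPC is a stand-in for /.../spdk/scripts/rpc.py (assumption: not the real tool).
RPC="echo"
NQN="nqn.2016-06.io.spdk:cnode1"

i=0
while (( i < 10 )); do                      # ns_hotplug_stress.sh@16
  # Attach namespaces 1..8 in a shuffled order; nsid N maps to bdev null$((N-1)).
  for n in $(shuf -e {1..8}); do            # ns_hotplug_stress.sh@17
    $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
  done
  # Detach the same namespaces, again in a shuffled order.
  for n in $(shuf -e {1..8}); do            # ns_hotplug_stress.sh@18
    $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
  done
  (( ++i ))
done
```

In the real run the add/remove RPCs race against host-side reconnects, which is why some iterations in the log advance the counter without a full add/remove cycle.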
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:30.427 rmmod nvme_tcp 00:28:30.427 rmmod nvme_fabrics 00:28:30.427 rmmod nvme_keyring 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:28:30.427 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 2511717 ']' 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 2511717 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 2511717 ']' 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 2511717 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:30.428 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2511717 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2511717' 00:28:30.686 killing process with pid 2511717 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 2511717 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 2511717 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:30.686 09:12:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:30.686 09:12:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:28:33.220 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:28:33.221 00:28:33.221 real 0m47.837s 00:28:33.221 user 2m58.222s 00:28:33.221 sys 0m19.804s 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:33.221 
09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:33.221 ************************************ 00:28:33.221 END TEST nvmf_ns_hotplug_stress 00:28:33.221 ************************************ 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:33.221 ************************************ 00:28:33.221 START TEST nvmf_delete_subsystem 00:28:33.221 ************************************ 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:28:33.221 * Looking for test storage... 
00:28:33.221 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:28:33.221 09:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:28:33.221 09:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.221 --rc genhtml_branch_coverage=1 00:28:33.221 --rc genhtml_function_coverage=1 00:28:33.221 --rc genhtml_legend=1 00:28:33.221 --rc geninfo_all_blocks=1 00:28:33.221 --rc geninfo_unexecuted_blocks=1 00:28:33.221 00:28:33.221 ' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.221 --rc genhtml_branch_coverage=1 00:28:33.221 --rc genhtml_function_coverage=1 00:28:33.221 --rc genhtml_legend=1 00:28:33.221 --rc geninfo_all_blocks=1 00:28:33.221 --rc geninfo_unexecuted_blocks=1 00:28:33.221 00:28:33.221 ' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.221 --rc genhtml_branch_coverage=1 00:28:33.221 --rc genhtml_function_coverage=1 00:28:33.221 --rc genhtml_legend=1 00:28:33.221 --rc geninfo_all_blocks=1 00:28:33.221 --rc geninfo_unexecuted_blocks=1 00:28:33.221 00:28:33.221 ' 00:28:33.221 09:12:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:33.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:33.221 --rc genhtml_branch_coverage=1 00:28:33.221 --rc genhtml_function_coverage=1 00:28:33.221 --rc genhtml_legend=1 00:28:33.221 --rc geninfo_all_blocks=1 00:28:33.221 --rc geninfo_unexecuted_blocks=1 00:28:33.221 00:28:33.221 ' 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:33.221 09:12:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:33.221 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.222 
09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:33.222 09:12:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:28:33.222 09:12:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:39.787 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:39.788 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:39.788 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:39.788 Found net devices under 0000:86:00.0: cvl_0_0 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:39.788 09:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:39.788 Found net devices under 0000:86:00.1: cvl_0_1 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:39.788 09:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:39.788 09:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:39.788 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:39.789 09:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:39.789 10.0.0.1 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:39.789 09:12:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:39.789 10.0.0.2 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 ))
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no ))
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 ))
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:28:39.789 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:28:39.789 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.444 ms
00:28:39.789
00:28:39.789 --- 10.0.0.1 ping statistics ---
00:28:39.789 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.789 rtt min/avg/max/mdev = 0.444/0.444/0.444/0.000 ms
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:28:39.789 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:28:39.790 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:28:39.790 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms
00:28:39.790
00:28:39.790 --- 10.0.0.2 ping statistics ---
00:28:39.790 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:28:39.790 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ ))
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:28:39.790 09:12:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=2522149
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 2522149
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 2522149 ']'
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100
00:28:39.790 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:28:39.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 [2024-11-20 09:12:55.111876] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode.
00:28:39.791 [2024-11-20 09:12:55.112807] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization...
00:28:39.791 [2024-11-20 09:12:55.112841] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:28:39.791 [2024-11-20 09:12:55.193469] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:28:39.791 [2024-11-20 09:12:55.234773] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:28:39.791 [2024-11-20 09:12:55.234811] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:28:39.791 [2024-11-20 09:12:55.234818] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:28:39.791 [2024-11-20 09:12:55.234828] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:28:39.791 [2024-11-20 09:12:55.234834] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:28:39.791 [2024-11-20 09:12:55.236043] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:28:39.791 [2024-11-20 09:12:55.236044] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:28:39.791 [2024-11-20 09:12:55.304617] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode.
00:28:39.791 [2024-11-20 09:12:55.305183] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode.
00:28:39.791 [2024-11-20 09:12:55.305365] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode.
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 [2024-11-20 09:12:55.372832] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 [2024-11-20 09:12:55.401207] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 NULL1
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 Delay0
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2522336
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2
00:28:39.791 09:12:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:39.791 [2024-11-20 09:12:55.513997] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:41.694 09:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:28:41.694 09:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:41.694 09:12:57 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 starting I/O failed: -6
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed with error (sct=0, sc=8)
00:28:41.953 Write completed with error (sct=0, sc=8)
00:28:41.953 Read completed
with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 starting I/O failed: -6 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 [2024-11-20 09:12:57.755691] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a30000c40 is same with the state(6) to be set 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, 
sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read 
completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Write completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:41.953 Read completed with error (sct=0, sc=8) 00:28:42.889 [2024-11-20 09:12:58.730869] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77f9a0 is same with the state(6) to be set 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error 
(sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 [2024-11-20 09:12:58.756673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e2c0 is same with the state(6) to be set 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 
00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 [2024-11-20 09:12:58.757085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x77e960 is same with the state(6) to be set 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 [2024-11-20 09:12:58.758363] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a3000d800 is same with the state(6) to be set 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Read completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.889 Write completed with error (sct=0, sc=8) 00:28:42.890 Write completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Write completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 Read completed with error (sct=0, sc=8) 00:28:42.890 [2024-11-20 09:12:58.758875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f9a3000d020 is same with the state(6) to be set 00:28:42.890 Initializing NVMe Controllers 00:28:42.890 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:28:42.890 Controller IO queue size 128, less than required. 00:28:42.890 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:28:42.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:28:42.890 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:28:42.890 Initialization complete. Launching workers. 
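Every failed completion above carries the same status pair, (sct=0, sc=8). A minimal sketch of decoding it follows; the status-string table is a small hand-copied excerpt of the NVMe generic command status values (consult the NVMe base specification for the authoritative list), so treat the exact strings as illustrative:

```shell
#!/usr/bin/env bash
# Decode an NVMe (sct, sc) completion pair as logged above.
# sct = Status Code Type; sct=0 means "generic command status".
# The table below is a partial, hand-copied excerpt of the spec's
# generic status values -- an assumption, not pulled from this log.
declare -A generic_status=(
  [0]="Successful Completion"
  [4]="Data Transfer Error"
  [7]="Command Abort Requested"
  [8]="Command Aborted due to SQ Deletion"
)

decode() {
  local sct=$1 sc=$2
  if (( sct == 0 )); then
    echo "${generic_status[$sc]:-generic status $sc}"
  else
    printf 'sct=%d, sc=0x%02x\n' "$sct" "$sc"
  fi
}

decode 0 8
```

sc=8 (aborted due to submission queue deletion) is consistent with this test, which deletes the subsystem out from under an active perf run.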
00:28:42.890 ========================================================
00:28:42.890 Latency(us)
00:28:42.890 Device Information                                                      :       IOPS     MiB/s    Average        min         max
00:28:42.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     180.59      0.09  920926.28     354.98  1008567.13
00:28:42.890 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     155.72      0.08  928058.03     247.59  1010267.23
00:28:42.890 ========================================================
00:28:42.890 Total                                                                   :     336.31      0.16  924228.41     247.59  1010267.23
00:28:42.890
00:28:42.890 [2024-11-20 09:12:58.759435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x77f9a0 (9): Bad file descriptor
00:28:42.890 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:28:42.890 09:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:42.890 09:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:28:42.890 09:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2522336
00:28:42.890 09:12:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 ))
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2522336
00:28:43.454 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2522336) - No such process
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2522336
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 2522336
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 2522336
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
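The trace above shows delete_subsystem.sh polling the perf process with `kill -0` (signal 0 checks only whether the PID exists) and a bounded `delay` counter with `sleep 0.5` between probes. A minimal standalone sketch of that pattern follows; the backgrounded `sleep` stands in for spdk_nvme_perf, and the bound of 20 mirrors the loop at script line 60 but is otherwise illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the kill -0 polling loop traced above (PID source and the
# stand-in child process are illustrative, not SPDK's exact script).
sleep 1 &                 # stand-in for the backgrounded spdk_nvme_perf run
perf_pid=$!

delay=0
while kill -0 "$perf_pid" 2>/dev/null; do   # signal 0: existence check only
    if (( delay++ > 20 )); then             # bail out instead of spinning forever
        echo "timed out waiting for $perf_pid" >&2
        exit 1
    fi
    sleep 0.5
done
echo "perf process $perf_pid has exited"
```

Once `kill -0` fails, the script can `wait` on the PID to collect its exit status, which is what the `NOT wait 2522336` helper in the trace does.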
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:43.454 [2024-11-20 09:12:59.289069] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2522819
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:43.454 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:43.454 [2024-11-20 09:12:59.372678] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release.
00:28:44.018 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:44.018 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:44.018 09:12:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:44.274 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:44.274 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:44.274 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:44.840 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:44.840 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:44.840 09:13:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:45.566 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:45.566 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:45.566 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:45.824 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:45.824 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:45.824 09:13:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:46.390 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:46.390 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:46.390 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5
00:28:46.647 Initializing NVMe Controllers
00:28:46.647 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:28:46.647 Controller IO queue size 128, less than required.
00:28:46.647 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:28:46.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:28:46.647 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:28:46.647 Initialization complete. Launching workers.
00:28:46.647 ========================================================
00:28:46.647 Latency(us)
00:28:46.647 Device Information                                                      :       IOPS     MiB/s     Average         min         max
00:28:46.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:     128.00      0.06  1002384.23  1000195.97  1041195.18
00:28:46.647 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:     128.00      0.06  1005192.11  1000190.09  1042782.00
00:28:46.647 ========================================================
00:28:46.647 Total                                                                   :     256.00      0.12  1003788.17  1000190.09  1042782.00
00:28:46.647
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 ))
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2522819
00:28:46.906 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2522819) - No such process
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2522819
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e
00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:46.906 rmmod nvme_tcp 00:28:46.906 rmmod nvme_fabrics 00:28:46.906 rmmod nvme_keyring 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 2522149 ']' 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 2522149 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 2522149 ']' 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 2522149 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:46.906 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2522149 00:28:47.166 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:47.166 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:47.166 09:13:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2522149' 00:28:47.166 killing process with pid 2522149 00:28:47.166 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 2522149 00:28:47.166 09:13:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 2522149 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:47.166 09:13:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:49.707 09:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:28:49.707 00:28:49.707 real 0m16.371s 00:28:49.707 user 0m26.304s 00:28:49.707 sys 0m6.412s 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:28:49.707 ************************************ 00:28:49.707 END TEST nvmf_delete_subsystem 00:28:49.707 ************************************ 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:49.707 
************************************ 00:28:49.707 START TEST nvmf_host_management 00:28:49.707 ************************************ 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:28:49.707 * Looking for test storage... 00:28:49.707 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:28:49.707 09:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:49.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.707 --rc genhtml_branch_coverage=1 00:28:49.707 --rc genhtml_function_coverage=1 00:28:49.707 --rc genhtml_legend=1 00:28:49.707 --rc geninfo_all_blocks=1 00:28:49.707 --rc geninfo_unexecuted_blocks=1 00:28:49.707 00:28:49.707 ' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:49.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.707 --rc genhtml_branch_coverage=1 00:28:49.707 --rc genhtml_function_coverage=1 00:28:49.707 --rc genhtml_legend=1 00:28:49.707 --rc geninfo_all_blocks=1 00:28:49.707 --rc geninfo_unexecuted_blocks=1 00:28:49.707 00:28:49.707 ' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:49.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:28:49.707 --rc genhtml_branch_coverage=1 00:28:49.707 --rc genhtml_function_coverage=1 00:28:49.707 --rc genhtml_legend=1 00:28:49.707 --rc geninfo_all_blocks=1 00:28:49.707 --rc geninfo_unexecuted_blocks=1 00:28:49.707 00:28:49.707 ' 00:28:49.707 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:49.707 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:49.707 --rc genhtml_branch_coverage=1 00:28:49.707 --rc genhtml_function_coverage=1 00:28:49.708 --rc genhtml_legend=1 00:28:49.708 --rc geninfo_all_blocks=1 00:28:49.708 --rc geninfo_unexecuted_blocks=1 00:28:49.708 00:28:49.708 ' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:49.708 09:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.708 
09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:28:49.708 09:13:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:28:49.708 09:13:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # 
net_devs=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:56.275 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:56.275 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:56.275 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:56.275 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:56.276 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:56.276 Found net devices under 0000:86:00.0: cvl_0_0 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:56.276 Found net devices under 0000:86:00.1: cvl_0_1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:56.276 
09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:56.276 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:56.276 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:56.277 10.0.0.1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
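The `val_to_ip` calls above turn integer pool values back into dotted-quad strings; the trace only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the byte extraction feeding those four operands is reconstructed here by assumption as a shift-and-mask:

```shell
# Hypothetical reconstruction of val_to_ip: split a 32-bit integer
# into its four bytes and print them as a dotted-quad IPv4 address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
```

With this sketch, `val_to_ip 167772161` yields `10.0.0.1` and `val_to_ip 167772162` yields `10.0.0.2`, matching the addresses assigned to cvl_0_0 and cvl_0_1 in the log.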
/sys/class/net/cvl_0_1/ifalias' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:56.277 10.0.0.2 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
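The `ipts` call expanded at common.sh@541 above re-issues the iptables rule with a `-m comment` tag embedding the original argument list, so teardown can later match and delete exactly the rules this test added. A sketch of that wrapper; the `IPT` indirection is an assumption added here purely so the function can be exercised without root (the real helper invokes `iptables` directly):

```shell
# Sketch of the ipts wrapper: tag each rule with its own argument list
# under an SPDK_NVMF: comment so cleanup can find it later.
# IPT is a test seam (not in the original); it defaults to iptables.
IPT=${IPT:-iptables}
ipts() {
  $IPT "$@" -m comment --comment "SPDK_NVMF:$*"
}
# Example (needs root with the real iptables):
# ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

`"$*"` joins the arguments with spaces, which is why the comment in the log reads `SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT`.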
nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # 
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:56.277 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:56.277 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:28:56.277 00:28:56.277 --- 10.0.0.1 ping statistics --- 00:28:56.277 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.277 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:56.277 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 
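The helpers traced above (`set_ip`, `set_up`, `ping_ip`, `get_ip_address`) all share one dispatch trick: the caller passes the *name* of an array (`NVMF_TARGET_NS_CMD`, which holds `ip netns exec nvmf_ns_spdk`), the helper binds it with `local -n`, and prepends it to the command. A minimal sketch of that pattern, with `echo` standing in for the namespace prefix so it runs unprivileged (an assumption; the real prefix executes inside the netns):

```shell
# Nameref dispatch: run a command either directly or behind a
# namespace-exec prefix whose array *name* is passed as $1.
# NS_PREFIX stands in for NVMF_TARGET_NS_CMD; the leading echo is a
# test stand-in for actually entering the namespace (needs root).
NS_PREFIX=(echo ip netns exec nvmf_ns_spdk)
run_cmd() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns        # bind to the caller's array by name
    "${ns[@]}" "$@"           # prefix the command, e.g. ip netns exec ...
  else
    "$@"                      # no namespace: run directly
  fi
}
```

`run_cmd NS_PREFIX ip link set cvl_0_1 up` prints the fully prefixed command, while `run_cmd "" some_cmd` executes directly — mirroring the `[[ -n $in_ns ]]` branches seen throughout the trace.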
-- # eval ' ping -c 1 10.0.0.2' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:56.278 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:56.278 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:28:56.278 00:28:56.278 --- 10.0.0.2 ping statistics --- 00:28:56.278 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:56.278 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:56.278 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:56.278 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:56.278 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
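The `get_ip_address` flow above works because `set_ip` earlier wrote each address into the interface's `ifalias` sysfs attribute (`tee /sys/class/net/<dev>/ifalias`); discovery is then a plain `cat` rather than parsing `ip addr` output. A sketch of the read side; `SYSFS_NET` is an assumption added here so the function can be pointed at a fake sysfs tree for testing (the real helper uses `/sys/class/net` directly):

```shell
# Read back the IP that set_ip stored in the interface's ifalias.
# SYSFS_NET is a test seam, not part of the original helpers.
SYSFS_NET=${SYSFS_NET:-/sys/class/net}
get_ip_address() {
  local dev=$1 ip
  ip=$(cat "$SYSFS_NET/$dev/ifalias" 2>/dev/null)
  [[ -n $ip ]] && echo "$ip"
}
```

An empty or missing `ifalias` yields no output and a nonzero status, which is how the trace ends up with `NVMF_SECOND_TARGET_IP=` when `target1` has no device mapped.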
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=2527033 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 2527033 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2527033 ']' 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:56.279 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 [2024-11-20 09:13:11.560835] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:56.279 [2024-11-20 09:13:11.561819] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:28:56.279 [2024-11-20 09:13:11.561858] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:56.279 [2024-11-20 09:13:11.629705] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:56.279 [2024-11-20 09:13:11.675254] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:28:56.279 [2024-11-20 09:13:11.675295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:56.279 [2024-11-20 09:13:11.675303] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:56.279 [2024-11-20 09:13:11.675309] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:56.279 [2024-11-20 09:13:11.675314] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:56.279 [2024-11-20 09:13:11.676832] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:56.279 [2024-11-20 09:13:11.676871] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:56.279 [2024-11-20 09:13:11.676988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:56.279 [2024-11-20 09:13:11.676988] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:28:56.279 [2024-11-20 09:13:11.745487] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:56.279 [2024-11-20 09:13:11.746561] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:28:56.279 [2024-11-20 09:13:11.746563] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:56.279 [2024-11-20 09:13:11.746933] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:56.279 [2024-11-20 09:13:11.746993] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 [2024-11-20 09:13:11.817746] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 09:13:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 Malloc0 00:28:56.279 [2024-11-20 09:13:11.901978] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2527082 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2527082 /var/tmp/bdevperf.sock 00:28:56.279 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 2527082 ']' 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:56.280 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:56.280 { 00:28:56.280 "params": { 00:28:56.280 "name": "Nvme$subsystem", 00:28:56.280 "trtype": "$TEST_TRANSPORT", 00:28:56.280 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:56.280 "adrfam": "ipv4", 00:28:56.280 "trsvcid": "$NVMF_PORT", 00:28:56.280 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:28:56.280 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:56.280 "hdgst": ${hdgst:-false}, 00:28:56.280 "ddgst": ${ddgst:-false} 00:28:56.280 }, 00:28:56.280 "method": "bdev_nvme_attach_controller" 00:28:56.280 } 00:28:56.280 EOF 00:28:56.280 )") 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:28:56.280 09:13:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:56.280 "params": { 00:28:56.280 "name": "Nvme0", 00:28:56.280 "trtype": "tcp", 00:28:56.280 "traddr": "10.0.0.2", 00:28:56.280 "adrfam": "ipv4", 00:28:56.280 "trsvcid": "4420", 00:28:56.280 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:56.280 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:56.280 "hdgst": false, 00:28:56.280 "ddgst": false 00:28:56.280 }, 00:28:56.280 "method": "bdev_nvme_attach_controller" 00:28:56.280 }' 00:28:56.280 [2024-11-20 09:13:11.996650] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:28:56.280 [2024-11-20 09:13:11.996698] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527082 ] 00:28:56.280 [2024-11-20 09:13:12.072908] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:56.280 [2024-11-20 09:13:12.117311] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:56.280 Running I/O for 10 seconds... 
00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:28:56.539 09:13:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:28:56.539 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 
00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=673 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 673 -ge 100 ']' 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.800 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.800 [2024-11-20 09:13:12.673560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673602] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673610] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673617] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv 
state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673642] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673666] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673678] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is 
same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673701] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673707] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673719] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673725] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673737] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673744] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673750] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673774] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be 
set 00:28:56.800 [2024-11-20 09:13:12.673780] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673786] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673791] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673803] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673809] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673816] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.673822] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x11bcec0 is same with the state(6) to be set 00:28:56.800 [2024-11-20 09:13:12.675114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:98304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:98432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:98560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:98688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:98816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:98944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:99072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:99200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 
09:13:12.675264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:99328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:99456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.800 [2024-11-20 09:13:12.675295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.800 [2024-11-20 09:13:12.675302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:99584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:99712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:99840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:99968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:100096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:100224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:100352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:100480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:100608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:100736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:100864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:100992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:101120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:101248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:101376 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:101504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:101632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:101760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:101888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:102016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 
[2024-11-20 09:13:12.675596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:102144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:102272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:102400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:102528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:102656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:102784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675678] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:102912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:103040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:103168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:103296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:103424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675760] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:103552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:103680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:103808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:103936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:104064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:104192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:104320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:104448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.801 [2024-11-20 09:13:12.675883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:104576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.801 [2024-11-20 09:13:12.675890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:104704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:104832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:104960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:28:56.802 [2024-11-20 09:13:12.675933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:105344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.675992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:105472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.675999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676023] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:105856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:106112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:106240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:106368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:28:56.802 [2024-11-20 09:13:12.676100] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.676126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:28:56.802 [2024-11-20 09:13:12.677066] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.802 task offset: 98304 on job bdev=Nvme0n1 fails 00:28:56.802 00:28:56.802 Latency(us) 00:28:56.802 [2024-11-20T08:13:12.843Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:56.802 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:56.802 Job: Nvme0n1 ended in about 0.40 seconds with error 00:28:56.802 Verification LBA range: start 0x0 length 0x400 00:28:56.802 Nvme0n1 : 0.40 1907.28 119.21 158.94 0.00 30130.99 1574.29 27354.16 00:28:56.802 [2024-11-20T08:13:12.843Z] =================================================================================================================== 00:28:56.802 [2024-11-20T08:13:12.843Z] Total : 1907.28 119.21 158.94 0.00 30130.99 1574.29 27354.16 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:28:56.802 [2024-11-20 09:13:12.679492] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:28:56.802 [2024-11-20 09:13:12.679517] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b500 (9): Bad file descriptor 00:28:56.802 [2024-11-20 09:13:12.680427] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:28:56.802 [2024-11-20 09:13:12.680496] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:28:56.802 [2024-11-20 09:13:12.680519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:28:56.802 [2024-11-20 09:13:12.680534] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:28:56.802 [2024-11-20 09:13:12.680542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:28:56.802 [2024-11-20 09:13:12.680549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:28:56.802 [2024-11-20 09:13:12.680555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x189b500 00:28:56.802 [2024-11-20 09:13:12.680574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x189b500 (9): Bad file descriptor 00:28:56.802 [2024-11-20 09:13:12.680586] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:28:56.802 [2024-11-20 09:13:12.680593] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:28:56.802 [2024-11-20 09:13:12.680601] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:28:56.802 [2024-11-20 09:13:12.680609] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:56.802 09:13:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2527082 00:28:57.736 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2527082) - No such process 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:28:57.736 { 
00:28:57.736 "params": { 00:28:57.736 "name": "Nvme$subsystem", 00:28:57.736 "trtype": "$TEST_TRANSPORT", 00:28:57.736 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:57.736 "adrfam": "ipv4", 00:28:57.736 "trsvcid": "$NVMF_PORT", 00:28:57.736 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:57.736 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:57.736 "hdgst": ${hdgst:-false}, 00:28:57.736 "ddgst": ${ddgst:-false} 00:28:57.736 }, 00:28:57.736 "method": "bdev_nvme_attach_controller" 00:28:57.736 } 00:28:57.736 EOF 00:28:57.736 )") 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:28:57.736 09:13:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:28:57.736 "params": { 00:28:57.736 "name": "Nvme0", 00:28:57.736 "trtype": "tcp", 00:28:57.736 "traddr": "10.0.0.2", 00:28:57.736 "adrfam": "ipv4", 00:28:57.736 "trsvcid": "4420", 00:28:57.736 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:57.736 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:57.736 "hdgst": false, 00:28:57.736 "ddgst": false 00:28:57.736 }, 00:28:57.736 "method": "bdev_nvme_attach_controller" 00:28:57.736 }' 00:28:57.736 [2024-11-20 09:13:13.746869] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:28:57.736 [2024-11-20 09:13:13.746917] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2527325 ] 00:28:57.995 [2024-11-20 09:13:13.822470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:57.995 [2024-11-20 09:13:13.861961] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:58.252 Running I/O for 1 seconds... 00:28:59.187 1984.00 IOPS, 124.00 MiB/s 00:28:59.187 Latency(us) 00:28:59.187 [2024-11-20T08:13:15.228Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:59.187 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:28:59.187 Verification LBA range: start 0x0 length 0x400 00:28:59.187 Nvme0n1 : 1.01 2031.70 126.98 0.00 0.00 30997.85 5299.87 27126.21 00:28:59.187 [2024-11-20T08:13:15.228Z] =================================================================================================================== 00:28:59.187 [2024-11-20T08:13:15.228Z] Total : 2031.70 126.98 0.00 0.00 30997.85 5299.87 27126.21 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
target/host_management.sh@40 -- # nvmftestfini 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:59.446 rmmod nvme_tcp 00:28:59.446 rmmod nvme_fabrics 00:28:59.446 rmmod nvme_keyring 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 2527033 ']' 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 2527033 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 2527033 ']' 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 2527033 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:28:59.446 09:13:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:59.446 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2527033 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2527033' 00:28:59.706 killing process with pid 2527033 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 2527033 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 2527033 00:28:59.706 [2024-11-20 09:13:15.657662] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:59.706 09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:59.706 
09:13:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 
== 3 )) 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:02.241 00:29:02.241 real 0m12.506s 00:29:02.241 user 0m18.155s 00:29:02.241 sys 0m6.346s 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:02.241 09:13:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:02.241 ************************************ 00:29:02.241 END TEST nvmf_host_management 00:29:02.241 ************************************ 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:02.241 ************************************ 00:29:02.241 START TEST nvmf_lvol 00:29:02.241 ************************************ 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:02.241 * Looking for test storage... 
00:29:02.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:02.241 09:13:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- 
# case "$op" in 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:02.241 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:02.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.242 --rc genhtml_branch_coverage=1 00:29:02.242 --rc genhtml_function_coverage=1 00:29:02.242 --rc genhtml_legend=1 00:29:02.242 --rc geninfo_all_blocks=1 00:29:02.242 --rc geninfo_unexecuted_blocks=1 00:29:02.242 00:29:02.242 ' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:02.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.242 --rc genhtml_branch_coverage=1 00:29:02.242 --rc genhtml_function_coverage=1 00:29:02.242 --rc genhtml_legend=1 00:29:02.242 --rc geninfo_all_blocks=1 00:29:02.242 --rc geninfo_unexecuted_blocks=1 00:29:02.242 00:29:02.242 ' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:02.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.242 --rc genhtml_branch_coverage=1 00:29:02.242 --rc genhtml_function_coverage=1 00:29:02.242 --rc genhtml_legend=1 00:29:02.242 --rc geninfo_all_blocks=1 00:29:02.242 --rc geninfo_unexecuted_blocks=1 00:29:02.242 00:29:02.242 ' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:02.242 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:02.242 --rc genhtml_branch_coverage=1 00:29:02.242 --rc genhtml_function_coverage=1 00:29:02.242 --rc genhtml_legend=1 00:29:02.242 --rc geninfo_all_blocks=1 00:29:02.242 --rc geninfo_unexecuted_blocks=1 00:29:02.242 00:29:02.242 ' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:29:02.242 09:13:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:29:08.810 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:08.810 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:08.810 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:08.810 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:08.810 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:08.810 Found net devices under 0000:86:00.0: cvl_0_0 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:08.810 Found net devices under 0000:86:00.1: cvl_0_1 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:08.810 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:29:08.810 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # 
ip netns exec nvmf_ns_spdk ip link set lo up 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 
00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:08.811 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:08.811 10.0.0.1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 
00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:08.811 10.0.0.2 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.811 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:08.811 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:08.812 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:08.812 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:08.812 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:29:08.812 00:29:08.812 --- 10.0.0.1 ping statistics --- 00:29:08.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.812 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # 
local dev=target0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:08.812 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:08.812 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:29:08.812 00:29:08.812 --- 10.0.0.2 ping statistics --- 00:29:08.812 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:08.812 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.812 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:08.813 
09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:08.813 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:29:08.813 09:13:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:08.813 09:13:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:08.813 09:13:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=2531159 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 2531159 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 2531159 ']' 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:08.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:08.813 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:08.813 [2024-11-20 09:13:24.113679] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:08.813 [2024-11-20 09:13:24.114652] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:29:08.814 [2024-11-20 09:13:24.114689] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:08.814 [2024-11-20 09:13:24.194052] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:08.814 [2024-11-20 09:13:24.236509] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:08.814 [2024-11-20 09:13:24.236547] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:08.814 [2024-11-20 09:13:24.236554] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:08.814 [2024-11-20 09:13:24.236560] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:08.814 [2024-11-20 09:13:24.236566] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:08.814 [2024-11-20 09:13:24.237998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:08.814 [2024-11-20 09:13:24.238107] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:08.814 [2024-11-20 09:13:24.238108] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:08.814 [2024-11-20 09:13:24.306623] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:08.814 [2024-11-20 09:13:24.307293] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:08.814 [2024-11-20 09:13:24.307359] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:08.814 [2024-11-20 09:13:24.307572] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:08.814 [2024-11-20 09:13:24.546836] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:08.814 09:13:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:09.074 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:09.074 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:09.332 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:09.591 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=71be7c0a-c03c-4010-a44f-a0f17ace45cc 00:29:09.591 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 71be7c0a-c03c-4010-a44f-a0f17ace45cc lvol 20 00:29:09.850 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=200cfc69-cc5b-4595-81d3-8dbceda63116 00:29:09.850 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:09.850 09:13:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 200cfc69-cc5b-4595-81d3-8dbceda63116 00:29:10.109 09:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:10.368 [2024-11-20 09:13:26.202759] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:10.368 09:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:10.626 
09:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2531591 00:29:10.626 09:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:10.626 09:13:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:11.562 09:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 200cfc69-cc5b-4595-81d3-8dbceda63116 MY_SNAPSHOT 00:29:11.820 09:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=bf2d765a-eae2-4e50-b9a7-d582a16411ee 00:29:11.820 09:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 200cfc69-cc5b-4595-81d3-8dbceda63116 30 00:29:12.136 09:13:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone bf2d765a-eae2-4e50-b9a7-d582a16411ee MY_CLONE 00:29:12.136 09:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=da276096-bf80-41ea-ae9b-8a789942909c 00:29:12.136 09:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate da276096-bf80-41ea-ae9b-8a789942909c 00:29:12.705 09:13:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2531591 00:29:20.823 Initializing NVMe Controllers 00:29:20.823 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:20.823 
Controller IO queue size 128, less than required. 00:29:20.823 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:20.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:20.823 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:20.823 Initialization complete. Launching workers. 00:29:20.823 ======================================================== 00:29:20.823 Latency(us) 00:29:20.823 Device Information : IOPS MiB/s Average min max 00:29:20.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12012.90 46.93 10659.57 5692.34 50823.51 00:29:20.823 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11861.60 46.33 10791.80 3918.57 60653.47 00:29:20.823 ======================================================== 00:29:20.823 Total : 23874.50 93.26 10725.26 3918.57 60653.47 00:29:20.823 00:29:21.081 09:13:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:21.081 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 200cfc69-cc5b-4595-81d3-8dbceda63116 00:29:21.340 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71be7c0a-c03c-4010-a44f-a0f17ace45cc 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:29:21.608 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:21.609 rmmod nvme_tcp 00:29:21.609 rmmod nvme_fabrics 00:29:21.609 rmmod nvme_keyring 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 2531159 ']' 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 2531159 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 2531159 ']' 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 2531159 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:21.609 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 2531159 00:29:21.610 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:21.610 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:21.610 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2531159' 00:29:21.612 killing process with pid 2531159 00:29:21.613 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 2531159 00:29:21.613 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 2531159 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:21.882 09:13:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:29:24.418 
09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:24.418 09:13:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:29:24.418 00:29:24.418 real 0m22.043s 00:29:24.418 user 0m55.872s 00:29:24.418 sys 0m9.930s 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:24.418 ************************************ 00:29:24.418 END TEST nvmf_lvol 00:29:24.418 ************************************ 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:24.418 09:13:39 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:24.418 ************************************ 00:29:24.418 START TEST nvmf_lvs_grow 00:29:24.418 ************************************ 00:29:24.418 09:13:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:29:24.418 * Looking for test storage... 00:29:24.418 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:29:24.418 09:13:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:24.418 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.419 --rc genhtml_branch_coverage=1 00:29:24.419 --rc genhtml_function_coverage=1 00:29:24.419 --rc genhtml_legend=1 00:29:24.419 --rc geninfo_all_blocks=1 00:29:24.419 --rc geninfo_unexecuted_blocks=1 00:29:24.419 00:29:24.419 ' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.419 --rc genhtml_branch_coverage=1 00:29:24.419 --rc genhtml_function_coverage=1 00:29:24.419 --rc genhtml_legend=1 00:29:24.419 --rc geninfo_all_blocks=1 00:29:24.419 --rc geninfo_unexecuted_blocks=1 00:29:24.419 00:29:24.419 ' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:24.419 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:24.419 --rc genhtml_branch_coverage=1 00:29:24.419 --rc genhtml_function_coverage=1 00:29:24.419 --rc genhtml_legend=1 00:29:24.419 --rc geninfo_all_blocks=1 00:29:24.419 --rc geninfo_unexecuted_blocks=1 00:29:24.419 00:29:24.419 ' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:24.419 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:29:24.419 --rc genhtml_branch_coverage=1 00:29:24.419 --rc genhtml_function_coverage=1 00:29:24.419 --rc genhtml_legend=1 00:29:24.419 --rc geninfo_all_blocks=1 00:29:24.419 --rc geninfo_unexecuted_blocks=1 00:29:24.419 00:29:24.419 ' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
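[editor's note] The paths/export.sh trace above keeps prepending the same toolchain directories (go, protoc, golangci) on every source, so the exported PATH accumulates many duplicate entries. A small generic filter (a sketch, not part of the SPDK scripts; the `dedup_path` name is invented here) can collapse repeats while preserving first-seen order:

```shell
# Collapse duplicate PATH entries, keeping the first occurrence of each.
# Hypothetical helper illustrating the duplication visible in the trace above.
dedup_path() {
  # Split on ':' (awk RS), print each entry only the first time it is seen,
  # then strip the trailing ':' that ORS leaves behind.
  printf '%s' "$1" | awk -v RS=: -v ORS=: '!seen[$0]++' | sed 's/:$//'
}

dedup_path "/opt/go/bin:/usr/bin:/opt/go/bin:/usr/local/bin:/usr/bin"
# /opt/go/bin:/usr/bin:/usr/local/bin
```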
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:29:24.419 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:29:24.420 09:13:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.994 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.995 
09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.995 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:30.995 09:13:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.995 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:30.995 09:13:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.995 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.995 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:30.995 
09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ 
tcp == tcp ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:30.995 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local 
val=167772161 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:30.996 10.0.0.1 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:30.996 09:13:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:30.996 10.0.0.2 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.996 09:13:45 
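[editor's note] The `set_ip` calls above pass the pool address as a 32-bit integer (167772161 and 167772162) and `val_to_ip` renders it as a dotted quad via `printf '%u.%u.%u.%u\n'`. The trace only shows the final printf with the bytes already extracted; a self-contained sketch of the same conversion (reconstructed from the trace, so the byte-shift arithmetic is an assumption about how `val_to_ip` works internally) is:

```shell
# Convert a 32-bit integer to dotted-quad IPv4 notation, as the setup.sh
# val_to_ip helper does in the trace above (reconstruction, not the original).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001)
val_to_ip 167772162   # 10.0.0.2 (0x0a000002)
```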
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:30.996 09:13:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:30.996 09:13:46 
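[editor's note] Stripped of the xtrace indirection, the interface-pair setup the trace performs between `create_target_ns` and the `dev_map` assignments boils down to the sequence below. This is a condensed view, not a runnable example: it requires root, and the interface names `cvl_0_0`/`cvl_0_1` and namespace `nvmf_ns_spdk` are specific to this CI host's ice-driver NICs.

```shell
# Condensed system-configuration fragment mirroring the trace (root required).
ip netns add nvmf_ns_spdk                                  # create target namespace
ip netns exec nvmf_ns_spdk ip link set lo up               # loopback inside the ns
ip link set cvl_0_1 netns nvmf_ns_spdk                     # move target NIC into ns
ip addr add 10.0.0.1/24 dev cvl_0_0                        # initiator side
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1   # target side
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT     # allow NVMe/TCP port
```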
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:30.996 
09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:30.996 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.996 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.371 ms 00:29:30.996 00:29:30.996 --- 10.0.0.1 ping statistics --- 00:29:30.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.996 rtt min/avg/max/mdev = 0.371/0.371/0.371/0.000 ms 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:30.996 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:30.996 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.133 ms 00:29:30.996 00:29:30.996 --- 10.0.0.2 ping statistics --- 00:29:30.996 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.996 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:30.996 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:30.997 09:13:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 
00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:30.997 
09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:30.997 09:13:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=2536961 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 2536961 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 2536961 ']' 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:30.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.997 [2024-11-20 09:13:46.236973] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:30.997 [2024-11-20 09:13:46.237902] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:29:30.997 [2024-11-20 09:13:46.237936] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:30.997 [2024-11-20 09:13:46.317367] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:30.997 [2024-11-20 09:13:46.358537] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:30.997 [2024-11-20 09:13:46.358576] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:30.997 [2024-11-20 09:13:46.358583] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:30.997 [2024-11-20 09:13:46.358589] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:30.997 [2024-11-20 09:13:46.358594] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:30.997 [2024-11-20 09:13:46.359158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:30.997 [2024-11-20 09:13:46.426335] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:30.997 [2024-11-20 09:13:46.426566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:30.997 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:30.998 [2024-11-20 09:13:46.655841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:30.998 ************************************ 00:29:30.998 START TEST lvs_grow_clean 00:29:30.998 ************************************ 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:29:30.998 09:13:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:30.998 09:13:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:31.257 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:31.257 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:31.257 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:31.515 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:31.515 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:31.515 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 42737f11-7e12-4fc8-b889-a5d1b22975da lvol 150 00:29:31.773 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=8bc19195-630a-493f-a294-a907e8f74db9 00:29:31.773 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:31.773 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:31.773 [2024-11-20 09:13:47.735552] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:31.773 [2024-11-20 09:13:47.735680] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:31.773 true 00:29:31.773 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:31.773 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:32.031 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:32.031 09:13:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:32.289 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8bc19195-630a-493f-a294-a907e8f74db9 00:29:32.548 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:32.548 [2024-11-20 09:13:48.528102] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.548 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2537360 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2537360 /var/tmp/bdevperf.sock 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 2537360 ']' 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.806 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.806 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:32.806 [2024-11-20 09:13:48.792765] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:29:32.806 [2024-11-20 09:13:48.792813] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2537360 ] 00:29:33.065 [2024-11-20 09:13:48.852371] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.065 [2024-11-20 09:13:48.892989] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:33.065 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.065 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:29:33.065 09:13:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:33.324 Nvme0n1 00:29:33.324 09:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:33.586 [ 00:29:33.586 { 00:29:33.586 "name": "Nvme0n1", 00:29:33.586 "aliases": [ 00:29:33.586 "8bc19195-630a-493f-a294-a907e8f74db9" 00:29:33.586 ], 00:29:33.586 "product_name": "NVMe disk", 00:29:33.586 
"block_size": 4096, 00:29:33.586 "num_blocks": 38912, 00:29:33.586 "uuid": "8bc19195-630a-493f-a294-a907e8f74db9", 00:29:33.586 "numa_id": 1, 00:29:33.586 "assigned_rate_limits": { 00:29:33.586 "rw_ios_per_sec": 0, 00:29:33.586 "rw_mbytes_per_sec": 0, 00:29:33.586 "r_mbytes_per_sec": 0, 00:29:33.586 "w_mbytes_per_sec": 0 00:29:33.586 }, 00:29:33.586 "claimed": false, 00:29:33.586 "zoned": false, 00:29:33.586 "supported_io_types": { 00:29:33.586 "read": true, 00:29:33.586 "write": true, 00:29:33.586 "unmap": true, 00:29:33.586 "flush": true, 00:29:33.586 "reset": true, 00:29:33.586 "nvme_admin": true, 00:29:33.586 "nvme_io": true, 00:29:33.586 "nvme_io_md": false, 00:29:33.586 "write_zeroes": true, 00:29:33.586 "zcopy": false, 00:29:33.586 "get_zone_info": false, 00:29:33.586 "zone_management": false, 00:29:33.586 "zone_append": false, 00:29:33.586 "compare": true, 00:29:33.586 "compare_and_write": true, 00:29:33.586 "abort": true, 00:29:33.586 "seek_hole": false, 00:29:33.586 "seek_data": false, 00:29:33.586 "copy": true, 00:29:33.586 "nvme_iov_md": false 00:29:33.586 }, 00:29:33.586 "memory_domains": [ 00:29:33.586 { 00:29:33.586 "dma_device_id": "system", 00:29:33.586 "dma_device_type": 1 00:29:33.586 } 00:29:33.586 ], 00:29:33.586 "driver_specific": { 00:29:33.586 "nvme": [ 00:29:33.586 { 00:29:33.586 "trid": { 00:29:33.586 "trtype": "TCP", 00:29:33.586 "adrfam": "IPv4", 00:29:33.586 "traddr": "10.0.0.2", 00:29:33.586 "trsvcid": "4420", 00:29:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:33.586 }, 00:29:33.586 "ctrlr_data": { 00:29:33.586 "cntlid": 1, 00:29:33.586 "vendor_id": "0x8086", 00:29:33.586 "model_number": "SPDK bdev Controller", 00:29:33.586 "serial_number": "SPDK0", 00:29:33.586 "firmware_revision": "25.01", 00:29:33.586 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:33.586 "oacs": { 00:29:33.586 "security": 0, 00:29:33.586 "format": 0, 00:29:33.586 "firmware": 0, 00:29:33.586 "ns_manage": 0 00:29:33.586 }, 00:29:33.586 "multi_ctrlr": true, 
00:29:33.586 "ana_reporting": false 00:29:33.586 }, 00:29:33.586 "vs": { 00:29:33.586 "nvme_version": "1.3" 00:29:33.586 }, 00:29:33.586 "ns_data": { 00:29:33.586 "id": 1, 00:29:33.586 "can_share": true 00:29:33.586 } 00:29:33.586 } 00:29:33.586 ], 00:29:33.586 "mp_policy": "active_passive" 00:29:33.586 } 00:29:33.586 } 00:29:33.586 ] 00:29:33.586 09:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2537470 00:29:33.586 09:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:33.586 09:13:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:33.586 Running I/O for 10 seconds... 00:29:34.958 Latency(us) 00:29:34.958 [2024-11-20T08:13:50.999Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:34.958 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:34.958 Nvme0n1 : 1.00 21590.00 84.34 0.00 0.00 0.00 0.00 0.00 00:29:34.958 [2024-11-20T08:13:50.999Z] =================================================================================================================== 00:29:34.958 [2024-11-20T08:13:50.999Z] Total : 21590.00 84.34 0.00 0.00 0.00 0.00 0.00 00:29:34.958 00:29:35.524 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:35.783 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:35.783 Nvme0n1 : 2.00 22034.50 86.07 0.00 0.00 0.00 0.00 0.00 00:29:35.783 [2024-11-20T08:13:51.824Z] 
=================================================================================================================== 00:29:35.783 [2024-11-20T08:13:51.824Z] Total : 22034.50 86.07 0.00 0.00 0.00 0.00 0.00 00:29:35.783 00:29:35.783 true 00:29:35.783 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:35.783 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:36.042 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:36.042 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:36.042 09:13:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2537470 00:29:36.609 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:36.609 Nvme0n1 : 3.00 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:29:36.609 [2024-11-20T08:13:52.650Z] =================================================================================================================== 00:29:36.609 [2024-11-20T08:13:52.650Z] Total : 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:29:36.609 00:29:37.544 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:37.544 Nvme0n1 : 4.00 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:29:37.544 [2024-11-20T08:13:53.585Z] =================================================================================================================== 00:29:37.544 [2024-11-20T08:13:53.585Z] Total : 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:29:37.544 00:29:38.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:29:38.919 Nvme0n1 : 5.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:38.919 [2024-11-20T08:13:54.960Z] =================================================================================================================== 00:29:38.919 [2024-11-20T08:13:54.960Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:38.919 00:29:39.854 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:39.854 Nvme0n1 : 6.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:39.854 [2024-11-20T08:13:55.895Z] =================================================================================================================== 00:29:39.854 [2024-11-20T08:13:55.895Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:39.854 00:29:40.790 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:40.790 Nvme0n1 : 7.00 22460.86 87.74 0.00 0.00 0.00 0.00 0.00 00:29:40.790 [2024-11-20T08:13:56.831Z] =================================================================================================================== 00:29:40.790 [2024-11-20T08:13:56.831Z] Total : 22460.86 87.74 0.00 0.00 0.00 0.00 0.00 00:29:40.790 00:29:41.723 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:41.723 Nvme0n1 : 8.00 22463.12 87.75 0.00 0.00 0.00 0.00 0.00 00:29:41.723 [2024-11-20T08:13:57.764Z] =================================================================================================================== 00:29:41.723 [2024-11-20T08:13:57.764Z] Total : 22463.12 87.75 0.00 0.00 0.00 0.00 0.00 00:29:41.723 00:29:42.656 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:42.656 Nvme0n1 : 9.00 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:42.656 [2024-11-20T08:13:58.697Z] =================================================================================================================== 00:29:42.656 [2024-11-20T08:13:58.697Z] Total : 22479.00 87.81 0.00 0.00 0.00 0.00 0.00 00:29:42.656 
00:29:43.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.591 Nvme0n1 : 10.00 22504.40 87.91 0.00 0.00 0.00 0.00 0.00 00:29:43.591 [2024-11-20T08:13:59.632Z] =================================================================================================================== 00:29:43.591 [2024-11-20T08:13:59.632Z] Total : 22504.40 87.91 0.00 0.00 0.00 0.00 0.00 00:29:43.591 00:29:43.591 00:29:43.591 Latency(us) 00:29:43.591 [2024-11-20T08:13:59.632Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.591 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:43.591 Nvme0n1 : 10.00 22508.43 87.92 0.00 0.00 5683.64 5100.41 27468.13 00:29:43.591 [2024-11-20T08:13:59.632Z] =================================================================================================================== 00:29:43.591 [2024-11-20T08:13:59.632Z] Total : 22508.43 87.92 0.00 0.00 5683.64 5100.41 27468.13 00:29:43.591 { 00:29:43.591 "results": [ 00:29:43.591 { 00:29:43.592 "job": "Nvme0n1", 00:29:43.592 "core_mask": "0x2", 00:29:43.592 "workload": "randwrite", 00:29:43.592 "status": "finished", 00:29:43.592 "queue_depth": 128, 00:29:43.592 "io_size": 4096, 00:29:43.592 "runtime": 10.003896, 00:29:43.592 "iops": 22508.430715393282, 00:29:43.592 "mibps": 87.92355748200501, 00:29:43.592 "io_failed": 0, 00:29:43.592 "io_timeout": 0, 00:29:43.592 "avg_latency_us": 5683.64042457978, 00:29:43.592 "min_latency_us": 5100.410434782609, 00:29:43.592 "max_latency_us": 27468.132173913044 00:29:43.592 } 00:29:43.592 ], 00:29:43.592 "core_count": 1 00:29:43.592 } 00:29:43.592 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2537360 00:29:43.592 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 2537360 ']' 00:29:43.592 09:13:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 2537360 00:29:43.592 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:29:43.592 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2537360 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2537360' 00:29:43.850 killing process with pid 2537360 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 2537360 00:29:43.850 Received shutdown signal, test time was about 10.000000 seconds 00:29:43.850 00:29:43.850 Latency(us) 00:29:43.850 [2024-11-20T08:13:59.891Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:43.850 [2024-11-20T08:13:59.891Z] =================================================================================================================== 00:29:43.850 [2024-11-20T08:13:59.891Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 2537360 00:29:43.850 09:13:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:44.109 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:44.368 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:44.368 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:29:44.627 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:29:44.627 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:29:44.627 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:44.627 [2024-11-20 09:14:00.639617] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:44.886 request: 00:29:44.886 { 00:29:44.886 "uuid": "42737f11-7e12-4fc8-b889-a5d1b22975da", 00:29:44.886 "method": 
"bdev_lvol_get_lvstores", 00:29:44.886 "req_id": 1 00:29:44.886 } 00:29:44.886 Got JSON-RPC error response 00:29:44.886 response: 00:29:44.886 { 00:29:44.886 "code": -19, 00:29:44.886 "message": "No such device" 00:29:44.886 } 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:44.886 09:14:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:45.144 aio_bdev 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8bc19195-630a-493f-a294-a907e8f74db9 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=8bc19195-630a-493f-a294-a907e8f74db9 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:29:45.144 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:29:45.402 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8bc19195-630a-493f-a294-a907e8f74db9 -t 2000 00:29:45.661 [ 00:29:45.661 { 00:29:45.661 "name": "8bc19195-630a-493f-a294-a907e8f74db9", 00:29:45.661 "aliases": [ 00:29:45.661 "lvs/lvol" 00:29:45.661 ], 00:29:45.661 "product_name": "Logical Volume", 00:29:45.661 "block_size": 4096, 00:29:45.661 "num_blocks": 38912, 00:29:45.661 "uuid": "8bc19195-630a-493f-a294-a907e8f74db9", 00:29:45.661 "assigned_rate_limits": { 00:29:45.661 "rw_ios_per_sec": 0, 00:29:45.661 "rw_mbytes_per_sec": 0, 00:29:45.661 "r_mbytes_per_sec": 0, 00:29:45.661 "w_mbytes_per_sec": 0 00:29:45.661 }, 00:29:45.661 "claimed": false, 00:29:45.661 "zoned": false, 00:29:45.661 "supported_io_types": { 00:29:45.661 "read": true, 00:29:45.661 "write": true, 00:29:45.661 "unmap": true, 00:29:45.661 "flush": false, 00:29:45.661 "reset": true, 00:29:45.661 "nvme_admin": false, 00:29:45.661 "nvme_io": false, 00:29:45.661 "nvme_io_md": false, 00:29:45.661 "write_zeroes": true, 00:29:45.661 "zcopy": false, 00:29:45.661 "get_zone_info": false, 00:29:45.661 "zone_management": false, 00:29:45.661 "zone_append": false, 00:29:45.661 "compare": false, 00:29:45.661 "compare_and_write": false, 00:29:45.661 "abort": false, 00:29:45.661 "seek_hole": true, 00:29:45.661 "seek_data": true, 00:29:45.661 "copy": false, 00:29:45.661 "nvme_iov_md": false 00:29:45.661 }, 00:29:45.661 "driver_specific": { 00:29:45.661 "lvol": { 00:29:45.661 "lvol_store_uuid": "42737f11-7e12-4fc8-b889-a5d1b22975da", 00:29:45.661 "base_bdev": "aio_bdev", 00:29:45.661 
"thin_provision": false, 00:29:45.661 "num_allocated_clusters": 38, 00:29:45.661 "snapshot": false, 00:29:45.661 "clone": false, 00:29:45.661 "esnap_clone": false 00:29:45.661 } 00:29:45.661 } 00:29:45.661 } 00:29:45.661 ] 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 42737f11-7e12-4fc8-b889-a5d1b22975da 00:29:45.661 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:29:45.919 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:29:45.919 09:14:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8bc19195-630a-493f-a294-a907e8f74db9 00:29:46.177 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 42737f11-7e12-4fc8-b889-a5d1b22975da 
00:29:46.436 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:29:46.436 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.695 00:29:46.695 real 0m15.764s 00:29:46.695 user 0m15.284s 00:29:46.695 sys 0m1.507s 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:29:46.695 ************************************ 00:29:46.695 END TEST lvs_grow_clean 00:29:46.695 ************************************ 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:29:46.695 ************************************ 00:29:46.695 START TEST lvs_grow_dirty 00:29:46.695 ************************************ 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:29:46.695 09:14:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:46.695 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:29:46.954 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:29:46.954 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:29:46.954 09:14:02 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:29:46.954 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:29:46.954 09:14:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:29:47.213 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:29:47.213 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:29:47.213 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 lvol 150 00:29:47.471 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:29:47.471 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:29:47.471 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:29:47.731 [2024-11-20 09:14:03.555546] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:29:47.731 [2024-11-20 
09:14:03.555673] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:29:47.731 true 00:29:47.731 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:29:47.731 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:29:47.990 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:29:47.990 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:47.990 09:14:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:29:48.248 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:48.507 [2024-11-20 09:14:04.295981] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2539908 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2539908 /var/tmp/bdevperf.sock 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2539908 ']' 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:48.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:48.507 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:29:48.766 [2024-11-20 09:14:04.549095] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:29:48.766 [2024-11-20 09:14:04.549147] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2539908 ] 00:29:48.766 [2024-11-20 09:14:04.623934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:48.766 [2024-11-20 09:14:04.666570] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:48.766 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:48.766 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:29:48.766 09:14:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:29:49.025 Nvme0n1 00:29:49.025 09:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:29:49.284 [ 00:29:49.284 { 00:29:49.284 "name": "Nvme0n1", 00:29:49.284 "aliases": [ 00:29:49.284 "3e9e9336-ea9e-40ab-87ef-592e58beaa20" 00:29:49.284 ], 00:29:49.284 "product_name": "NVMe disk", 00:29:49.284 "block_size": 4096, 00:29:49.284 "num_blocks": 38912, 00:29:49.284 "uuid": "3e9e9336-ea9e-40ab-87ef-592e58beaa20", 00:29:49.284 "numa_id": 1, 00:29:49.284 "assigned_rate_limits": { 00:29:49.284 "rw_ios_per_sec": 0, 00:29:49.284 "rw_mbytes_per_sec": 0, 00:29:49.284 "r_mbytes_per_sec": 0, 00:29:49.284 "w_mbytes_per_sec": 0 00:29:49.284 }, 00:29:49.284 "claimed": false, 00:29:49.284 "zoned": false, 
00:29:49.284 "supported_io_types": { 00:29:49.284 "read": true, 00:29:49.284 "write": true, 00:29:49.284 "unmap": true, 00:29:49.284 "flush": true, 00:29:49.284 "reset": true, 00:29:49.284 "nvme_admin": true, 00:29:49.284 "nvme_io": true, 00:29:49.284 "nvme_io_md": false, 00:29:49.284 "write_zeroes": true, 00:29:49.284 "zcopy": false, 00:29:49.284 "get_zone_info": false, 00:29:49.284 "zone_management": false, 00:29:49.284 "zone_append": false, 00:29:49.284 "compare": true, 00:29:49.284 "compare_and_write": true, 00:29:49.284 "abort": true, 00:29:49.284 "seek_hole": false, 00:29:49.284 "seek_data": false, 00:29:49.284 "copy": true, 00:29:49.284 "nvme_iov_md": false 00:29:49.284 }, 00:29:49.284 "memory_domains": [ 00:29:49.284 { 00:29:49.284 "dma_device_id": "system", 00:29:49.284 "dma_device_type": 1 00:29:49.284 } 00:29:49.284 ], 00:29:49.284 "driver_specific": { 00:29:49.284 "nvme": [ 00:29:49.284 { 00:29:49.284 "trid": { 00:29:49.284 "trtype": "TCP", 00:29:49.284 "adrfam": "IPv4", 00:29:49.284 "traddr": "10.0.0.2", 00:29:49.284 "trsvcid": "4420", 00:29:49.284 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:29:49.284 }, 00:29:49.284 "ctrlr_data": { 00:29:49.284 "cntlid": 1, 00:29:49.284 "vendor_id": "0x8086", 00:29:49.284 "model_number": "SPDK bdev Controller", 00:29:49.284 "serial_number": "SPDK0", 00:29:49.284 "firmware_revision": "25.01", 00:29:49.284 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:49.284 "oacs": { 00:29:49.284 "security": 0, 00:29:49.284 "format": 0, 00:29:49.284 "firmware": 0, 00:29:49.284 "ns_manage": 0 00:29:49.284 }, 00:29:49.284 "multi_ctrlr": true, 00:29:49.284 "ana_reporting": false 00:29:49.284 }, 00:29:49.284 "vs": { 00:29:49.284 "nvme_version": "1.3" 00:29:49.284 }, 00:29:49.284 "ns_data": { 00:29:49.284 "id": 1, 00:29:49.284 "can_share": true 00:29:49.284 } 00:29:49.284 } 00:29:49.284 ], 00:29:49.284 "mp_policy": "active_passive" 00:29:49.284 } 00:29:49.284 } 00:29:49.284 ] 00:29:49.284 09:14:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2540054 00:29:49.284 09:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:29:49.284 09:14:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:29:49.543 Running I/O for 10 seconds... 00:29:50.477 Latency(us) 00:29:50.477 [2024-11-20T08:14:06.518Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:50.477 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:50.477 Nvme0n1 : 1.00 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:29:50.477 [2024-11-20T08:14:06.518Z] =================================================================================================================== 00:29:50.477 [2024-11-20T08:14:06.518Z] Total : 21844.00 85.33 0.00 0.00 0.00 0.00 0.00 00:29:50.477 00:29:51.414 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:29:51.414 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:51.414 Nvme0n1 : 2.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:51.414 [2024-11-20T08:14:07.455Z] =================================================================================================================== 00:29:51.414 [2024-11-20T08:14:07.455Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:29:51.414 00:29:51.414 true 00:29:51.414 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:29:51.414 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:29:51.673 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:29:51.673 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:29:51.673 09:14:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2540054 00:29:52.608 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:52.608 Nvme0n1 : 3.00 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:29:52.608 [2024-11-20T08:14:08.649Z] =================================================================================================================== 00:29:52.608 [2024-11-20T08:14:08.649Z] Total : 22182.67 86.65 0.00 0.00 0.00 0.00 0.00 00:29:52.608 00:29:53.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:53.545 Nvme0n1 : 4.00 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:29:53.545 [2024-11-20T08:14:09.586Z] =================================================================================================================== 00:29:53.545 [2024-11-20T08:14:09.586Z] Total : 22288.50 87.06 0.00 0.00 0.00 0.00 0.00 00:29:53.545 00:29:54.481 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:54.481 Nvme0n1 : 5.00 22301.20 87.11 0.00 0.00 0.00 0.00 0.00 00:29:54.481 [2024-11-20T08:14:10.522Z] =================================================================================================================== 00:29:54.481 [2024-11-20T08:14:10.522Z] Total : 22301.20 87.11 0.00 0.00 0.00 0.00 0.00 00:29:54.481 00:29:55.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:29:55.418 Nvme0n1 : 6.00 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:55.418 [2024-11-20T08:14:11.459Z] =================================================================================================================== 00:29:55.418 [2024-11-20T08:14:11.459Z] Total : 22352.00 87.31 0.00 0.00 0.00 0.00 0.00 00:29:55.418 00:29:56.356 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:56.356 Nvme0n1 : 7.00 22388.29 87.45 0.00 0.00 0.00 0.00 0.00 00:29:56.356 [2024-11-20T08:14:12.397Z] =================================================================================================================== 00:29:56.356 [2024-11-20T08:14:12.397Z] Total : 22388.29 87.45 0.00 0.00 0.00 0.00 0.00 00:29:56.356 00:29:57.732 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:57.732 Nvme0n1 : 8.00 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:57.732 [2024-11-20T08:14:13.773Z] =================================================================================================================== 00:29:57.732 [2024-11-20T08:14:13.773Z] Total : 22415.50 87.56 0.00 0.00 0.00 0.00 0.00 00:29:57.732 00:29:58.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:58.671 Nvme0n1 : 9.00 22450.78 87.70 0.00 0.00 0.00 0.00 0.00 00:29:58.671 [2024-11-20T08:14:14.712Z] =================================================================================================================== 00:29:58.671 [2024-11-20T08:14:14.712Z] Total : 22450.78 87.70 0.00 0.00 0.00 0.00 0.00 00:29:58.671 00:29:59.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.611 Nvme0n1 : 10.00 22469.70 87.77 0.00 0.00 0.00 0.00 0.00 00:29:59.611 [2024-11-20T08:14:15.652Z] =================================================================================================================== 00:29:59.611 [2024-11-20T08:14:15.652Z] Total : 22469.70 87.77 0.00 0.00 0.00 0.00 0.00 00:29:59.611 00:29:59.611 
00:29:59.611 Latency(us) 00:29:59.611 [2024-11-20T08:14:15.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.611 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:29:59.611 Nvme0n1 : 10.01 22476.73 87.80 0.00 0.00 5691.48 3205.57 27696.08 00:29:59.611 [2024-11-20T08:14:15.652Z] =================================================================================================================== 00:29:59.611 [2024-11-20T08:14:15.652Z] Total : 22476.73 87.80 0.00 0.00 5691.48 3205.57 27696.08 00:29:59.611 { 00:29:59.611 "results": [ 00:29:59.611 { 00:29:59.611 "job": "Nvme0n1", 00:29:59.611 "core_mask": "0x2", 00:29:59.611 "workload": "randwrite", 00:29:59.611 "status": "finished", 00:29:59.611 "queue_depth": 128, 00:29:59.611 "io_size": 4096, 00:29:59.611 "runtime": 10.006705, 00:29:59.611 "iops": 22476.729352968836, 00:29:59.611 "mibps": 87.79972403503452, 00:29:59.611 "io_failed": 0, 00:29:59.611 "io_timeout": 0, 00:29:59.611 "avg_latency_us": 5691.482372095415, 00:29:59.611 "min_latency_us": 3205.5652173913045, 00:29:59.611 "max_latency_us": 27696.08347826087 00:29:59.611 } 00:29:59.611 ], 00:29:59.611 "core_count": 1 00:29:59.611 } 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2539908 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 2539908 ']' 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 2539908 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:59.611 09:14:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2539908 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2539908' 00:29:59.611 killing process with pid 2539908 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 2539908 00:29:59.611 Received shutdown signal, test time was about 10.000000 seconds 00:29:59.611 00:29:59.611 Latency(us) 00:29:59.611 [2024-11-20T08:14:15.652Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:59.611 [2024-11-20T08:14:15.652Z] =================================================================================================================== 00:29:59.611 [2024-11-20T08:14:15.652Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 2539908 00:29:59.611 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:59.870 09:14:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:00.128 09:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:00.128 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2536961 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2536961 00:30:00.386 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2536961 Killed "${NVMF_APP[@]}" "$@" 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:00.386 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=2541872 00:30:00.387 09:14:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 2541872 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 2541872 ']' 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:00.387 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:00.387 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.387 [2024-11-20 09:14:16.324928] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:00.387 [2024-11-20 09:14:16.325828] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:30:00.387 [2024-11-20 09:14:16.325865] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:00.387 [2024-11-20 09:14:16.404971] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:00.646 [2024-11-20 09:14:16.444672] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:00.646 [2024-11-20 09:14:16.444703] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:00.646 [2024-11-20 09:14:16.444710] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:00.646 [2024-11-20 09:14:16.444716] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:00.646 [2024-11-20 09:14:16.444721] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:00.646 [2024-11-20 09:14:16.445265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:00.646 [2024-11-20 09:14:16.511928] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:00.646 [2024-11-20 09:14:16.512180] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:00.646 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:00.904 [2024-11-20 09:14:16.762633] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:00.904 [2024-11-20 09:14:16.762833] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:00.904 [2024-11-20 09:14:16.762916] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local 
bdev_name=3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:00.904 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:01.163 09:14:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e9e9336-ea9e-40ab-87ef-592e58beaa20 -t 2000 00:30:01.163 [ 00:30:01.163 { 00:30:01.163 "name": "3e9e9336-ea9e-40ab-87ef-592e58beaa20", 00:30:01.163 "aliases": [ 00:30:01.163 "lvs/lvol" 00:30:01.163 ], 00:30:01.163 "product_name": "Logical Volume", 00:30:01.163 "block_size": 4096, 00:30:01.163 "num_blocks": 38912, 00:30:01.163 "uuid": "3e9e9336-ea9e-40ab-87ef-592e58beaa20", 00:30:01.163 "assigned_rate_limits": { 00:30:01.163 "rw_ios_per_sec": 0, 00:30:01.163 "rw_mbytes_per_sec": 0, 00:30:01.163 "r_mbytes_per_sec": 0, 00:30:01.163 "w_mbytes_per_sec": 0 00:30:01.163 }, 00:30:01.163 "claimed": false, 00:30:01.163 "zoned": false, 00:30:01.163 "supported_io_types": { 00:30:01.163 "read": true, 00:30:01.163 "write": true, 00:30:01.163 "unmap": true, 00:30:01.163 "flush": false, 00:30:01.163 "reset": true, 00:30:01.163 "nvme_admin": false, 00:30:01.163 "nvme_io": false, 00:30:01.163 "nvme_io_md": false, 00:30:01.163 "write_zeroes": true, 
00:30:01.163 "zcopy": false, 00:30:01.163 "get_zone_info": false, 00:30:01.163 "zone_management": false, 00:30:01.163 "zone_append": false, 00:30:01.163 "compare": false, 00:30:01.163 "compare_and_write": false, 00:30:01.163 "abort": false, 00:30:01.163 "seek_hole": true, 00:30:01.163 "seek_data": true, 00:30:01.163 "copy": false, 00:30:01.163 "nvme_iov_md": false 00:30:01.163 }, 00:30:01.163 "driver_specific": { 00:30:01.163 "lvol": { 00:30:01.163 "lvol_store_uuid": "b255a4cd-92e9-4923-a34d-cfa80e6ee652", 00:30:01.163 "base_bdev": "aio_bdev", 00:30:01.163 "thin_provision": false, 00:30:01.163 "num_allocated_clusters": 38, 00:30:01.163 "snapshot": false, 00:30:01.163 "clone": false, 00:30:01.163 "esnap_clone": false 00:30:01.163 } 00:30:01.163 } 00:30:01.163 } 00:30:01.163 ] 00:30:01.163 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:01.163 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:01.163 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:01.421 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:01.421 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:01.422 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:01.681 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:01.681 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:01.940 [2024-11-20 09:14:17.745749] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:01.940 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:01.940 request: 00:30:01.940 { 00:30:01.940 "uuid": "b255a4cd-92e9-4923-a34d-cfa80e6ee652", 00:30:01.940 "method": "bdev_lvol_get_lvstores", 00:30:01.940 "req_id": 1 00:30:01.940 } 00:30:01.940 Got JSON-RPC error response 00:30:01.940 response: 00:30:01.940 { 00:30:01.940 "code": -19, 00:30:01.940 "message": "No such device" 00:30:01.940 } 00:30:02.200 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:02.200 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:02.200 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:02.200 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:02.200 09:14:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:02.200 aio_bdev 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:02.200 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:02.459 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 3e9e9336-ea9e-40ab-87ef-592e58beaa20 -t 2000 00:30:02.718 [ 00:30:02.718 { 00:30:02.718 "name": "3e9e9336-ea9e-40ab-87ef-592e58beaa20", 00:30:02.718 "aliases": [ 00:30:02.718 "lvs/lvol" 00:30:02.718 ], 00:30:02.718 "product_name": "Logical Volume", 00:30:02.718 "block_size": 4096, 00:30:02.718 "num_blocks": 38912, 00:30:02.718 "uuid": "3e9e9336-ea9e-40ab-87ef-592e58beaa20", 00:30:02.718 "assigned_rate_limits": { 00:30:02.718 "rw_ios_per_sec": 0, 00:30:02.718 "rw_mbytes_per_sec": 0, 00:30:02.718 
"r_mbytes_per_sec": 0, 00:30:02.718 "w_mbytes_per_sec": 0 00:30:02.718 }, 00:30:02.718 "claimed": false, 00:30:02.718 "zoned": false, 00:30:02.718 "supported_io_types": { 00:30:02.718 "read": true, 00:30:02.718 "write": true, 00:30:02.718 "unmap": true, 00:30:02.718 "flush": false, 00:30:02.718 "reset": true, 00:30:02.718 "nvme_admin": false, 00:30:02.718 "nvme_io": false, 00:30:02.718 "nvme_io_md": false, 00:30:02.718 "write_zeroes": true, 00:30:02.718 "zcopy": false, 00:30:02.718 "get_zone_info": false, 00:30:02.718 "zone_management": false, 00:30:02.718 "zone_append": false, 00:30:02.718 "compare": false, 00:30:02.718 "compare_and_write": false, 00:30:02.718 "abort": false, 00:30:02.718 "seek_hole": true, 00:30:02.718 "seek_data": true, 00:30:02.718 "copy": false, 00:30:02.718 "nvme_iov_md": false 00:30:02.718 }, 00:30:02.718 "driver_specific": { 00:30:02.718 "lvol": { 00:30:02.718 "lvol_store_uuid": "b255a4cd-92e9-4923-a34d-cfa80e6ee652", 00:30:02.718 "base_bdev": "aio_bdev", 00:30:02.718 "thin_provision": false, 00:30:02.718 "num_allocated_clusters": 38, 00:30:02.718 "snapshot": false, 00:30:02.718 "clone": false, 00:30:02.718 "esnap_clone": false 00:30:02.718 } 00:30:02.718 } 00:30:02.718 } 00:30:02.718 ] 00:30:02.718 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:02.718 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:02.718 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:02.977 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:02.977 09:14:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:02.977 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:02.977 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:02.977 09:14:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3e9e9336-ea9e-40ab-87ef-592e58beaa20 00:30:03.236 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b255a4cd-92e9-4923-a34d-cfa80e6ee652 00:30:03.494 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:03.753 00:30:03.753 real 0m17.032s 00:30:03.753 user 0m34.561s 00:30:03.753 sys 0m3.724s 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:03.753 ************************************ 00:30:03.753 END TEST lvs_grow_dirty 00:30:03.753 ************************************ 
00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:03.753 nvmf_trace.0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:03.753 09:14:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:03.753 rmmod nvme_tcp 00:30:03.753 rmmod nvme_fabrics 00:30:03.753 rmmod nvme_keyring 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 2541872 ']' 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 2541872 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 2541872 ']' 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 2541872 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:03.753 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2541872 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:04.012 
09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2541872' 00:30:04.012 killing process with pid 2541872 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 2541872 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 2541872 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:04.012 09:14:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 
00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 
00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:30:06.548 00:30:06.548 real 0m42.086s 00:30:06.548 user 0m52.478s 00:30:06.548 sys 0m10.124s 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:06.548 ************************************ 00:30:06.548 END TEST nvmf_lvs_grow 00:30:06.548 ************************************ 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:06.548 ************************************ 00:30:06.548 START TEST nvmf_bdev_io_wait 00:30:06.548 ************************************ 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh 
--transport=tcp --interrupt-mode 00:30:06.548 * Looking for test storage... 00:30:06.548 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:06.548 09:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:06.548 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:06.548 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.548 --rc genhtml_branch_coverage=1 00:30:06.548 --rc genhtml_function_coverage=1 00:30:06.548 --rc genhtml_legend=1 00:30:06.549 --rc geninfo_all_blocks=1 00:30:06.549 --rc geninfo_unexecuted_blocks=1 00:30:06.549 00:30:06.549 ' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.549 --rc genhtml_branch_coverage=1 00:30:06.549 --rc genhtml_function_coverage=1 00:30:06.549 --rc genhtml_legend=1 00:30:06.549 --rc geninfo_all_blocks=1 00:30:06.549 --rc geninfo_unexecuted_blocks=1 00:30:06.549 00:30:06.549 ' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.549 --rc genhtml_branch_coverage=1 00:30:06.549 --rc genhtml_function_coverage=1 00:30:06.549 --rc genhtml_legend=1 00:30:06.549 --rc geninfo_all_blocks=1 00:30:06.549 --rc geninfo_unexecuted_blocks=1 00:30:06.549 00:30:06.549 ' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:06.549 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:06.549 
--rc genhtml_branch_coverage=1 00:30:06.549 --rc genhtml_function_coverage=1 00:30:06.549 --rc genhtml_legend=1 00:30:06.549 --rc geninfo_all_blocks=1 00:30:06.549 --rc geninfo_unexecuted_blocks=1 00:30:06.549 00:30:06.549 ' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.549 09:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:06.549 09:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:06.549 09:14:22 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:30:06.549 09:14:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:11.970 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.970 
09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:11.970 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:11.970 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:11.971 Found net devices under 0000:86:00.0: cvl_0_0 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:11.971 Found net devices under 0000:86:00.1: cvl_0_1 
00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:11.971 09:14:27 
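The setup_interfaces arithmetic traced above hands out two consecutive addresses per initiator/target pair from a pool starting at 0x0a000001; a minimal standalone sketch of that allocation (variable names are illustrative, not taken from setup.sh):

```shell
# Each pair consumes two consecutive IPs from the pool; pair 0 gets
# 167772161 (10.0.0.1) for the initiator and 167772162 (10.0.0.2)
# for the target, matching the setup_interface_pair 0 ... 167772161
# call in the trace.
ip_pool=$((0x0a000001))
pair=0
initiator_ip=$((ip_pool + pair * 2))
target_ip=$((initiator_ip + 1))
echo "$initiator_ip $target_ip"  # 167772161 167772162
```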
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:11.971 09:14:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:12.241 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:12.241 10.0.0.1 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:12.241 10.0.0.2 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:12.241 
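val_to_ip, exercised twice in the trace above (167772161 becomes 10.0.0.1, 167772162 becomes 10.0.0.2), unpacks a 32-bit integer into dotted-quad form with printf; a self-contained sketch of the same conversion, assuming the byte layout implied by the printf arguments in the log:

```shell
# Turn a 32-bit integer (e.g. 167772161) into dotted-quad notation,
# mirroring the val_to_ip helper from nvmf/setup.sh seen in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}
val_to_ip 167772161  # 10.0.0.1
val_to_ip 167772162  # 10.0.0.2
```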
09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.241 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:12.242 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:12.242 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:30:12.242 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:12.242 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:30:12.242 00:30:12.242 --- 10.0.0.1 ping statistics --- 00:30:12.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.242 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:12.242 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:12.242 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:12.242 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:30:12.242 00:30:12.242 --- 10.0.0.2 ping statistics --- 00:30:12.242 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:12.242 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.242 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:12.242 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:12.501 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:12.501 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.502 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:12.502 
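The repeated get_ip_address calls above never parse `ip addr` output: set_ip records each assigned address in the device's ifalias, and later lookups simply cat it back from sysfs. A sketch of that bookkeeping, using a temp directory in place of /sys/class/net (an assumption for the demo, since writing a real ifalias needs a live interface):

```shell
# set_ip stores the assigned IP in the interface's ifalias file;
# get_ip_address reads it back later, as seen repeatedly in the trace.
# A mktemp dir stands in for /sys/class/net here.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"
echo 10.0.0.1 | tee "$sysfs/cvl_0_0/ifalias" >/dev/null
ip_out=$(cat "$sysfs/cvl_0_0/ifalias")
echo "$ip_out"  # 10.0.0.1
rm -r "$sysfs"
```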
09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:12.502 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=2545960 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 2545960 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 2545960 ']' 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:12.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:12.502 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.502 [2024-11-20 09:14:28.407836] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:12.502 [2024-11-20 09:14:28.408771] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:12.502 [2024-11-20 09:14:28.408805] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:12.502 [2024-11-20 09:14:28.485622] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:12.502 [2024-11-20 09:14:28.529393] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:12.502 [2024-11-20 09:14:28.529433] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:12.502 [2024-11-20 09:14:28.529440] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:12.502 [2024-11-20 09:14:28.529446] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:12.502 [2024-11-20 09:14:28.529453] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:12.502 [2024-11-20 09:14:28.530894] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:12.502 [2024-11-20 09:14:28.530934] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:12.502 [2024-11-20 09:14:28.531041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.502 [2024-11-20 09:14:28.531042] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:12.502 [2024-11-20 09:14:28.531457] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.761 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 [2024-11-20 09:14:28.652226] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:12.761 [2024-11-20 09:14:28.652786] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:12.761 [2024-11-20 09:14:28.653007] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:12.761 [2024-11-20 09:14:28.653145] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 [2024-11-20 09:14:28.663849] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 Malloc0 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.761 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.762 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:12.762 [2024-11-20 09:14:28.736093] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2545989 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2545991 00:30:12.762 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:12.762 { 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme$subsystem", 00:30:12.762 "trtype": "$TEST_TRANSPORT", 00:30:12.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "$NVMF_PORT", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.762 "hdgst": ${hdgst:-false}, 00:30:12.762 "ddgst": ${ddgst:-false} 00:30:12.762 }, 00:30:12.762 "method": "bdev_nvme_attach_controller" 00:30:12.762 } 00:30:12.762 EOF 00:30:12.762 )") 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2545993 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:12.762 09:14:28 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:12.762 { 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme$subsystem", 00:30:12.762 "trtype": "$TEST_TRANSPORT", 00:30:12.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "$NVMF_PORT", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.762 "hdgst": ${hdgst:-false}, 00:30:12.762 "ddgst": ${ddgst:-false} 00:30:12.762 }, 00:30:12.762 "method": "bdev_nvme_attach_controller" 00:30:12.762 } 00:30:12.762 EOF 00:30:12.762 )") 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2545996 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:12.762 { 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme$subsystem", 00:30:12.762 "trtype": "$TEST_TRANSPORT", 00:30:12.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "$NVMF_PORT", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.762 "hdgst": ${hdgst:-false}, 00:30:12.762 "ddgst": ${ddgst:-false} 00:30:12.762 }, 00:30:12.762 "method": "bdev_nvme_attach_controller" 00:30:12.762 } 00:30:12.762 EOF 00:30:12.762 )") 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:12.762 { 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme$subsystem", 00:30:12.762 "trtype": "$TEST_TRANSPORT", 00:30:12.762 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "$NVMF_PORT", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:12.762 "hdgst": ${hdgst:-false}, 00:30:12.762 "ddgst": ${ddgst:-false} 00:30:12.762 }, 00:30:12.762 "method": 
"bdev_nvme_attach_controller" 00:30:12.762 } 00:30:12.762 EOF 00:30:12.762 )") 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 2545989 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme1", 00:30:12.762 "trtype": "tcp", 00:30:12.762 "traddr": "10.0.0.2", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "4420", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.762 "hdgst": false, 00:30:12.762 "ddgst": false 00:30:12.762 }, 00:30:12.762 "method": "bdev_nvme_attach_controller" 00:30:12.762 }' 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme1", 00:30:12.762 "trtype": "tcp", 00:30:12.762 "traddr": "10.0.0.2", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "4420", 00:30:12.762 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.762 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.762 "hdgst": false, 00:30:12.762 "ddgst": false 00:30:12.762 }, 00:30:12.762 "method": "bdev_nvme_attach_controller" 00:30:12.762 }' 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:12.762 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:12.762 "params": { 00:30:12.762 "name": "Nvme1", 00:30:12.762 "trtype": "tcp", 00:30:12.762 "traddr": "10.0.0.2", 00:30:12.762 "adrfam": "ipv4", 00:30:12.762 "trsvcid": "4420", 00:30:12.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.763 "hdgst": false, 00:30:12.763 "ddgst": false 00:30:12.763 }, 00:30:12.763 "method": "bdev_nvme_attach_controller" 00:30:12.763 }' 00:30:12.763 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:12.763 09:14:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:12.763 "params": { 00:30:12.763 "name": "Nvme1", 00:30:12.763 "trtype": "tcp", 00:30:12.763 "traddr": "10.0.0.2", 00:30:12.763 "adrfam": "ipv4", 00:30:12.763 "trsvcid": "4420", 00:30:12.763 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:12.763 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:12.763 "hdgst": false, 00:30:12.763 "ddgst": false 00:30:12.763 }, 00:30:12.763 "method": "bdev_nvme_attach_controller" 
00:30:12.763 }' 00:30:12.763 [2024-11-20 09:14:28.786013] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:12.763 [2024-11-20 09:14:28.786063] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:30:12.763 [2024-11-20 09:14:28.787526] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:12.763 [2024-11-20 09:14:28.787572] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:12.763 [2024-11-20 09:14:28.789828] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:12.763 [2024-11-20 09:14:28.789870] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:12.763 [2024-11-20 09:14:28.793365] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:30:12.763 [2024-11-20 09:14:28.793406] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:13.020 [2024-11-20 09:14:28.978148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.020 [2024-11-20 09:14:29.032354] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:13.020 [2024-11-20 09:14:29.032491] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.277 [2024-11-20 09:14:29.075529] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:13.277 [2024-11-20 09:14:29.093173] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.277 [2024-11-20 09:14:29.130762] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:13.277 [2024-11-20 09:14:29.190350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:13.277 [2024-11-20 09:14:29.242051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:13.277 Running I/O for 1 seconds... 00:30:13.277 Running I/O for 1 seconds... 00:30:13.534 Running I/O for 1 seconds... 00:30:13.535 Running I/O for 1 seconds... 
00:30:14.487 11667.00 IOPS, 45.57 MiB/s 00:30:14.487 Latency(us) 00:30:14.487 [2024-11-20T08:14:30.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.487 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:14.487 Nvme1n1 : 1.01 11713.46 45.76 0.00 0.00 10888.89 3533.25 12252.38 00:30:14.487 [2024-11-20T08:14:30.528Z] =================================================================================================================== 00:30:14.487 [2024-11-20T08:14:30.528Z] Total : 11713.46 45.76 0.00 0.00 10888.89 3533.25 12252.38 00:30:14.487 9469.00 IOPS, 36.99 MiB/s 00:30:14.487 Latency(us) 00:30:14.487 [2024-11-20T08:14:30.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.487 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:14.487 Nvme1n1 : 1.01 9548.72 37.30 0.00 0.00 13361.54 1510.18 15728.64 00:30:14.487 [2024-11-20T08:14:30.528Z] =================================================================================================================== 00:30:14.487 [2024-11-20T08:14:30.528Z] Total : 9548.72 37.30 0.00 0.00 13361.54 1510.18 15728.64 00:30:14.487 11457.00 IOPS, 44.75 MiB/s 00:30:14.487 Latency(us) 00:30:14.487 [2024-11-20T08:14:30.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.487 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:14.487 Nvme1n1 : 1.00 11552.16 45.13 0.00 0.00 11056.40 2279.51 16868.40 00:30:14.487 [2024-11-20T08:14:30.528Z] =================================================================================================================== 00:30:14.487 [2024-11-20T08:14:30.528Z] Total : 11552.16 45.13 0.00 0.00 11056.40 2279.51 16868.40 00:30:14.487 245048.00 IOPS, 957.22 MiB/s 00:30:14.487 Latency(us) 00:30:14.487 [2024-11-20T08:14:30.528Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:14.487 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:14.487 Nvme1n1 : 1.00 244668.45 955.74 0.00 0.00 520.38 231.51 1538.67 00:30:14.487 [2024-11-20T08:14:30.528Z] =================================================================================================================== 00:30:14.487 [2024-11-20T08:14:30.528Z] Total : 244668.45 955.74 0.00 0.00 520.38 231.51 1538.67 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2545991 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2545993 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2545996 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:14.487 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:14.746 09:14:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:14.746 rmmod nvme_tcp 00:30:14.746 rmmod nvme_fabrics 00:30:14.746 rmmod nvme_keyring 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 2545960 ']' 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 2545960 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 2545960 ']' 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 2545960 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2545960 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2545960' 00:30:14.746 killing process with pid 2545960 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 2545960 00:30:14.746 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 2545960 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:15.005 09:14:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:16.908 09:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:16.908 09:14:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:30:16.908 00:30:16.908 real 0m10.779s 00:30:16.908 user 0m14.468s 00:30:16.908 sys 0m6.578s 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:16.908 ************************************ 00:30:16.908 END TEST nvmf_bdev_io_wait 00:30:16.908 ************************************ 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:16.908 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:17.168 ************************************ 00:30:17.168 START TEST nvmf_queue_depth 
00:30:17.168 ************************************ 00:30:17.168 09:14:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:17.168 * Looking for test storage... 00:30:17.168 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:30:17.168 09:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:30:17.168 09:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:17.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.168 --rc genhtml_branch_coverage=1 00:30:17.168 --rc genhtml_function_coverage=1 00:30:17.168 --rc genhtml_legend=1 00:30:17.168 --rc geninfo_all_blocks=1 00:30:17.168 --rc geninfo_unexecuted_blocks=1 00:30:17.168 00:30:17.168 ' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:17.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.168 --rc genhtml_branch_coverage=1 00:30:17.168 --rc genhtml_function_coverage=1 00:30:17.168 --rc genhtml_legend=1 00:30:17.168 --rc geninfo_all_blocks=1 00:30:17.168 --rc geninfo_unexecuted_blocks=1 00:30:17.168 00:30:17.168 ' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:17.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.168 --rc genhtml_branch_coverage=1 00:30:17.168 --rc genhtml_function_coverage=1 00:30:17.168 --rc genhtml_legend=1 00:30:17.168 --rc geninfo_all_blocks=1 00:30:17.168 --rc geninfo_unexecuted_blocks=1 00:30:17.168 
00:30:17.168 ' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:17.168 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:17.168 --rc genhtml_branch_coverage=1 00:30:17.168 --rc genhtml_function_coverage=1 00:30:17.168 --rc genhtml_legend=1 00:30:17.168 --rc geninfo_all_blocks=1 00:30:17.168 --rc geninfo_unexecuted_blocks=1 00:30:17.168 00:30:17.168 ' 00:30:17.168 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.169 09:14:33 
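The `cmp_versions 1.15 '<' 2` trace above splits each version on `.`/`-` into an array and compares field by field. A minimal bash re-implementation of that idea (the `ver_lt` name is mine, not SPDK's):

```shell
#!/usr/bin/env bash
# Sketch of a dotted-version "less than" test, field-by-field numeric
# compare, padding missing fields with 0 (assumes purely numeric parts).
ver_lt() {
  local IFS=.
  local -a a=($1) b=($2)
  local i x y
  for (( i = 0; i < ${#a[@]} || i < ${#b[@]}; i++ )); do
    x=${a[i]:-0} y=${b[i]:-0}
    (( x < y )) && return 0   # strictly smaller in this field: lt
    (( x > y )) && return 1   # strictly larger: not lt
  done
  return 1                    # all fields equal: not lt
}

ver_lt 1.15 2 && echo "older"   # prints "older", matching the trace
```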
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.169 09:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
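The `paths/export.sh` lines above show the same `/opt/{golangci,protoc,go}` directories prepended to `PATH` on every source, so the exported value carries many duplicates. A hedged sketch of a first-occurrence-wins dedup pass (the `dedup_path` helper is hypothetical, not part of the traced scripts):

```shell
#!/usr/bin/env bash
# Collapse a colon-separated PATH-like string, keeping only the first
# occurrence of each entry in order.
dedup_path() {
  local out= seen=: IFS=: p
  for p in $1; do                       # unquoted: split on ':'
    case "$seen" in *":$p:"*) continue ;; esac
    seen="$seen$p:"
    out="${out:+$out:}$p"
  done
  echo "$out"
}
```

Running it over the exported `PATH` shown above would leave one copy each of the Go, protoc, and golangci bins ahead of the system directories.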
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:30:17.169 09:14:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:30:17.169 09:14:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:30:23.738 09:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:23.738 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:23.738 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:23.738 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:23.739 Found net devices under 0000:86:00.0: cvl_0_0 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:23.739 Found net devices under 0000:86:00.1: cvl_0_1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:23.739 09:14:38 
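The discovery loop above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `"${pci_net_devs[@]##*/}"`) finds the kernel netdev name under each PCI function via sysfs. A small stand-alone sketch of the same lookup; the second argument, which overrides the sysfs root, is my addition for testability:

```shell
#!/usr/bin/env bash
# List network interfaces that sysfs exposes under one PCI function,
# e.g. pci_net_devs 0000:86:00.0  ->  cvl_0_0 on the machine traced above.
pci_net_devs() {
  local pci=$1 base=${2:-/sys/bus/pci/devices} d
  for d in "$base/$pci/net/"*; do
    [ -e "$d" ] && echo "${d##*/}"   # strip the directory, keep the ifname
  done
}
```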
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:23.739 09:14:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:23.739 10.0.0.1 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:30:23.739 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:23.740 10.0.0.2 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
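The `val_to_ip 167772161` / `printf '%u.%u.%u.%u\n' 10 0 0 1` steps above convert a 32-bit integer from the `ip_pool` counter into dotted-quad form. A self-contained sketch of that conversion using shift/mask arithmetic:

```shell
#!/usr/bin/env bash
# Unpack a 32-bit integer into dotted-quad IPv4 notation.
# 167772161 == 0x0A000001, the pool base seen in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >>  8) & 255 )) $((  val        & 255 ))
}

val_to_ip 167772161   # prints 10.0.0.1 (assigned to cvl_0_0)
val_to_ip 167772162   # prints 10.0.0.2 (assigned to cvl_0_1 in the netns)
```

This is why the setup code can hand out initiator/target pairs just by incrementing `ip_pool` by 2 per pair.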
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:23.740 09:14:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 
-- # local dev=initiator0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:23.740 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:23.740 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.362 ms 00:30:23.740 00:30:23.740 --- 10.0.0.1 ping statistics --- 00:30:23.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.740 rtt min/avg/max/mdev = 0.362/0.362/0.362/0.000 ms 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:23.740 09:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:23.740 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:23.740 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.130 ms 00:30:23.740 00:30:23.740 --- 10.0.0.2 ping statistics --- 00:30:23.740 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:23.740 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:23.740 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:23.741 09:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:23.741 09:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=2549799 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 2549799 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2549799 ']' 00:30:23.741 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 [2024-11-20 09:14:39.249560] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:23.742 [2024-11-20 09:14:39.250507] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:23.742 [2024-11-20 09:14:39.250541] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:23.742 [2024-11-20 09:14:39.331094] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.742 [2024-11-20 09:14:39.373002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:23.742 [2024-11-20 09:14:39.373035] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:23.742 [2024-11-20 09:14:39.373042] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:23.742 [2024-11-20 09:14:39.373048] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:23.742 [2024-11-20 09:14:39.373053] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:23.742 [2024-11-20 09:14:39.373593] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:23.742 [2024-11-20 09:14:39.439517] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:23.742 [2024-11-20 09:14:39.439752] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 [2024-11-20 09:14:39.506320] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 Malloc0 00:30:23.742 09:14:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 [2024-11-20 09:14:39.578350] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:23.742 
09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2550004 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2550004 /var/tmp/bdevperf.sock 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 2550004 ']' 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:23.742 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:23.742 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:23.742 [2024-11-20 09:14:39.629827] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:30:23.742 [2024-11-20 09:14:39.629869] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2550004 ] 00:30:23.742 [2024-11-20 09:14:39.704196] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.742 [2024-11-20 09:14:39.745599] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:24.001 NVMe0n1 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:24.001 09:14:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:24.001 Running I/O for 10 seconds... 
00:30:26.312 11284.00 IOPS, 44.08 MiB/s [2024-11-20T08:14:43.288Z] 11781.00 IOPS, 46.02 MiB/s [2024-11-20T08:14:44.223Z] 11946.00 IOPS, 46.66 MiB/s [2024-11-20T08:14:45.158Z] 12009.75 IOPS, 46.91 MiB/s [2024-11-20T08:14:46.093Z] 12062.40 IOPS, 47.12 MiB/s [2024-11-20T08:14:47.468Z] 12102.33 IOPS, 47.27 MiB/s [2024-11-20T08:14:48.035Z] 12140.43 IOPS, 47.42 MiB/s [2024-11-20T08:14:49.422Z] 12157.75 IOPS, 47.49 MiB/s [2024-11-20T08:14:50.358Z] 12189.00 IOPS, 47.61 MiB/s [2024-11-20T08:14:50.358Z] 12212.50 IOPS, 47.71 MiB/s 00:30:34.317 Latency(us) 00:30:34.317 [2024-11-20T08:14:50.358Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.317 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:30:34.317 Verification LBA range: start 0x0 length 0x4000 00:30:34.317 NVMe0n1 : 10.06 12247.41 47.84 0.00 0.00 83297.60 11739.49 53340.61 00:30:34.317 [2024-11-20T08:14:50.358Z] =================================================================================================================== 00:30:34.317 [2024-11-20T08:14:50.358Z] Total : 12247.41 47.84 0.00 0.00 83297.60 11739.49 53340.61 00:30:34.317 { 00:30:34.317 "results": [ 00:30:34.317 { 00:30:34.317 "job": "NVMe0n1", 00:30:34.317 "core_mask": "0x1", 00:30:34.317 "workload": "verify", 00:30:34.317 "status": "finished", 00:30:34.317 "verify_range": { 00:30:34.317 "start": 0, 00:30:34.317 "length": 16384 00:30:34.317 }, 00:30:34.317 "queue_depth": 1024, 00:30:34.317 "io_size": 4096, 00:30:34.317 "runtime": 10.055106, 00:30:34.317 "iops": 12247.409425619184, 00:30:34.317 "mibps": 47.84144306882494, 00:30:34.317 "io_failed": 0, 00:30:34.317 "io_timeout": 0, 00:30:34.317 "avg_latency_us": 83297.59551400972, 00:30:34.317 "min_latency_us": 11739.492173913044, 00:30:34.317 "max_latency_us": 53340.605217391305 00:30:34.317 } 00:30:34.317 ], 00:30:34.317 "core_count": 1 00:30:34.317 } 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 2550004 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2550004 ']' 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2550004 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2550004 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2550004' 00:30:34.317 killing process with pid 2550004 00:30:34.317 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2550004 00:30:34.317 Received shutdown signal, test time was about 10.000000 seconds 00:30:34.317 00:30:34.318 Latency(us) 00:30:34.318 [2024-11-20T08:14:50.359Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:34.318 [2024-11-20T08:14:50.359Z] =================================================================================================================== 00:30:34.318 [2024-11-20T08:14:50.359Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2550004 00:30:34.318 09:14:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:34.318 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:34.318 rmmod nvme_tcp 00:30:34.318 rmmod nvme_fabrics 00:30:34.577 rmmod nvme_keyring 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 2549799 ']' 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 2549799 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 2549799 ']' 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 2549799 00:30:34.577 09:14:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2549799 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2549799' 00:30:34.577 killing process with pid 2549799 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 2549799 00:30:34.577 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 2549799 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:34.836 09:14:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:34.836 09:14:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:36.739 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:36.739 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:36.739 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:36.740 09:14:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:30:36.740 00:30:36.740 real 0m19.734s 00:30:36.740 user 0m22.705s 00:30:36.740 sys 0m6.302s 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:30:36.740 ************************************ 00:30:36.740 END TEST nvmf_queue_depth 00:30:36.740 ************************************ 00:30:36.740 09:14:52 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:36.740 ************************************ 00:30:36.740 START TEST nvmf_nmic 00:30:36.740 ************************************ 00:30:36.740 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:30:37.000 * Looking for test storage... 00:30:37.000 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.000 --rc genhtml_branch_coverage=1 00:30:37.000 --rc 
genhtml_function_coverage=1 00:30:37.000 --rc genhtml_legend=1 00:30:37.000 --rc geninfo_all_blocks=1 00:30:37.000 --rc geninfo_unexecuted_blocks=1 00:30:37.000 00:30:37.000 ' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.000 --rc genhtml_branch_coverage=1 00:30:37.000 --rc genhtml_function_coverage=1 00:30:37.000 --rc genhtml_legend=1 00:30:37.000 --rc geninfo_all_blocks=1 00:30:37.000 --rc geninfo_unexecuted_blocks=1 00:30:37.000 00:30:37.000 ' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.000 --rc genhtml_branch_coverage=1 00:30:37.000 --rc genhtml_function_coverage=1 00:30:37.000 --rc genhtml_legend=1 00:30:37.000 --rc geninfo_all_blocks=1 00:30:37.000 --rc geninfo_unexecuted_blocks=1 00:30:37.000 00:30:37.000 ' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:37.000 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:37.000 --rc genhtml_branch_coverage=1 00:30:37.000 --rc genhtml_function_coverage=1 00:30:37.000 --rc genhtml_legend=1 00:30:37.000 --rc geninfo_all_blocks=1 00:30:37.000 --rc geninfo_unexecuted_blocks=1 00:30:37.000 00:30:37.000 ' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.000 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:37.001 09:14:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:30:37.001 09:14:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A 
pci_drivers 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:43.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:43.571 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:43.571 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:43.572 
09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:43.572 Found net devices under 0000:86:00.0: cvl_0_0 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:43.572 Found net devices under 0000:86:00.1: cvl_0_1 00:30:43.572 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.572 
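The interface setup that follows hands `val_to_ip` an integer from the IP pool (`0x0a000001` = 167772161, see setup.sh@11-13 below) and gets back a dotted quad. A plausible reconstruction of that helper, not the verbatim SPDK source, is:

```shell
# Plausible reconstruction of setup.sh's val_to_ip: split a 32-bit integer
# into four octets with shifts, matching the printf '%u.%u.%u.%u' in the trace.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) $((  val        & 0xff ))
}
val_to_ip 167772161   # initiator address in the trace: 10.0.0.1
val_to_ip 167772162   # target address in the trace: 10.0.0.2
```

Each initiator/target pair consumes two consecutive values from the pool (setup.sh@33: `_dev++, ip_pool += 2`), which is why the pair here lands on 10.0.0.1 and 10.0.0.2.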
09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 
00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:43.572 10.0.0.1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:43.572 10.0.0.2 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:43.572 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.572 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair 
= 0 )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:43.573 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:43.573 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:43.573 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.470 ms 00:30:43.573 00:30:43.573 --- 10.0.0.1 ping statistics --- 00:30:43.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.573 rtt min/avg/max/mdev = 0.470/0.470/0.470/0.000 ms 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.573 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:43.573 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:43.573 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:30:43.573 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.238 ms 00:30:43.573 00:30:43.573 --- 10.0.0.2 ping statistics --- 00:30:43.573 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:43.573 rtt min/avg/max/mdev = 0.238/0.238/0.238/0.000 ms 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n '' ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:43.573 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:43.573 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:43.574 09:14:58 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=2555173 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 2555173 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 2555173 ']' 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:43.574 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:43.574 09:14:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 [2024-11-20 09:14:59.032804] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:43.574 [2024-11-20 09:14:59.033792] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:30:43.574 [2024-11-20 09:14:59.033831] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:43.574 [2024-11-20 09:14:59.114421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:43.574 [2024-11-20 09:14:59.158331] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:43.574 [2024-11-20 09:14:59.158367] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:43.574 [2024-11-20 09:14:59.158375] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:43.574 [2024-11-20 09:14:59.158381] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:43.574 [2024-11-20 09:14:59.158386] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:43.574 [2024-11-20 09:14:59.159991] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:43.574 [2024-11-20 09:14:59.160036] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:43.574 [2024-11-20 09:14:59.160145] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:43.574 [2024-11-20 09:14:59.160146] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:43.574 [2024-11-20 09:14:59.229650] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:43.574 [2024-11-20 09:14:59.230130] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:43.574 [2024-11-20 09:14:59.230541] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:30:43.574 [2024-11-20 09:14:59.230962] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:43.574 [2024-11-20 09:14:59.230986] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 [2024-11-20 09:14:59.296983] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 Malloc0 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.574 [2024-11-20 09:14:59.377143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:43.574 09:14:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:30:43.574 test case1: single bdev can't be used in multiple subsystems 00:30:43.574 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.575 [2024-11-20 09:14:59.412681] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:30:43.575 [2024-11-20 09:14:59.412706] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:30:43.575 [2024-11-20 09:14:59.412714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:30:43.575 request: 00:30:43.575 { 00:30:43.575 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:30:43.575 "namespace": { 00:30:43.575 "bdev_name": "Malloc0", 00:30:43.575 "no_auto_visible": false 00:30:43.575 }, 00:30:43.575 "method": "nvmf_subsystem_add_ns", 00:30:43.575 "req_id": 1 00:30:43.575 } 00:30:43.575 Got JSON-RPC error response 00:30:43.575 response: 00:30:43.575 { 00:30:43.575 "code": -32602, 00:30:43.575 "message": "Invalid parameters" 00:30:43.575 } 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:30:43.575 Adding namespace failed - expected result. 
00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:30:43.575 test case2: host connect to nvmf target in multiple paths 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:43.575 [2024-11-20 09:14:59.424766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:43.575 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:30:43.834 09:14:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:30:46.364 09:15:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:30:46.364 [global] 00:30:46.364 thread=1 00:30:46.364 invalidate=1 00:30:46.364 rw=write 00:30:46.364 time_based=1 00:30:46.364 runtime=1 00:30:46.364 ioengine=libaio 00:30:46.364 direct=1 00:30:46.364 bs=4096 00:30:46.364 iodepth=1 00:30:46.364 norandommap=0 00:30:46.364 numjobs=1 00:30:46.364 00:30:46.364 verify_dump=1 00:30:46.364 verify_backlog=512 00:30:46.364 verify_state_save=0 00:30:46.364 do_verify=1 00:30:46.364 verify=crc32c-intel 00:30:46.364 [job0] 00:30:46.364 filename=/dev/nvme0n1 00:30:46.364 Could not set queue depth (nvme0n1) 00:30:46.364 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:30:46.364 fio-3.35 00:30:46.364 Starting 1 thread 00:30:47.299 00:30:47.299 job0: (groupid=0, jobs=1): err= 0: pid=2555914: Wed Nov 20 
09:15:03 2024 00:30:47.299 read: IOPS=1359, BW=5439KiB/s (5569kB/s)(5444KiB/1001msec) 00:30:47.299 slat (nsec): min=7160, max=39511, avg=8365.37, stdev=1931.37 00:30:47.299 clat (usec): min=173, max=41094, avg=526.62, stdev=3650.58 00:30:47.299 lat (usec): min=189, max=41113, avg=534.98, stdev=3651.09 00:30:47.299 clat percentiles (usec): 00:30:47.299 | 1.00th=[ 186], 5.00th=[ 188], 10.00th=[ 190], 20.00th=[ 190], 00:30:47.299 | 30.00th=[ 192], 40.00th=[ 194], 50.00th=[ 196], 60.00th=[ 196], 00:30:47.299 | 70.00th=[ 200], 80.00th=[ 202], 90.00th=[ 206], 95.00th=[ 210], 00:30:47.299 | 99.00th=[ 322], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:30:47.299 | 99.99th=[41157] 00:30:47.299 write: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec); 0 zone resets 00:30:47.299 slat (usec): min=10, max=27213, avg=29.46, stdev=694.06 00:30:47.299 clat (usec): min=125, max=251, avg=142.25, stdev= 9.22 00:30:47.299 lat (usec): min=141, max=27434, avg=171.71, stdev=696.14 00:30:47.299 clat percentiles (usec): 00:30:47.299 | 1.00th=[ 133], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 137], 00:30:47.299 | 30.00th=[ 139], 40.00th=[ 139], 50.00th=[ 141], 60.00th=[ 141], 00:30:47.299 | 70.00th=[ 143], 80.00th=[ 145], 90.00th=[ 151], 95.00th=[ 155], 00:30:47.299 | 99.00th=[ 184], 99.50th=[ 188], 99.90th=[ 239], 99.95th=[ 251], 00:30:47.299 | 99.99th=[ 251] 00:30:47.299 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:30:47.299 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:30:47.299 lat (usec) : 250=99.48%, 500=0.10% 00:30:47.299 lat (msec) : 2=0.03%, 50=0.38% 00:30:47.299 cpu : usr=2.60%, sys=4.30%, ctx=2900, majf=0, minf=1 00:30:47.299 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:30:47.299 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.299 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:30:47.299 issued rwts: 
total=1361,1536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:30:47.299 latency : target=0, window=0, percentile=100.00%, depth=1 00:30:47.299 00:30:47.299 Run status group 0 (all jobs): 00:30:47.299 READ: bw=5439KiB/s (5569kB/s), 5439KiB/s-5439KiB/s (5569kB/s-5569kB/s), io=5444KiB (5575kB), run=1001-1001msec 00:30:47.299 WRITE: bw=6138KiB/s (6285kB/s), 6138KiB/s-6138KiB/s (6285kB/s-6285kB/s), io=6144KiB (6291kB), run=1001-1001msec 00:30:47.299 00:30:47.299 Disk stats (read/write): 00:30:47.299 nvme0n1: ios=1384/1536, merge=0/0, ticks=1562/201, in_queue=1763, util=98.60% 00:30:47.299 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:30:47.557 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:30:47.557 09:15:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:47.557 rmmod nvme_tcp 00:30:47.557 rmmod nvme_fabrics 00:30:47.557 rmmod nvme_keyring 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 2555173 ']' 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 2555173 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 2555173 ']' 00:30:47.557 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 2555173 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2555173 00:30:47.816 
09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2555173' 00:30:47.816 killing process with pid 2555173 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 2555173 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 2555173 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:47.816 09:15:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:30:50.351 09:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:50.351 09:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:30:50.351 00:30:50.351 real 0m13.139s 00:30:50.351 user 0m23.380s 00:30:50.351 sys 0m6.174s 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:30:50.351 ************************************ 00:30:50.351 END TEST nvmf_nmic 00:30:50.351 ************************************ 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:50.351 09:15:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:50.351 ************************************ 00:30:50.351 START TEST nvmf_fio_target 00:30:50.351 ************************************ 00:30:50.351 09:15:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:30:50.351 * Looking for test storage... 00:30:50.351 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:50.351 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:50.351 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:30:50.351 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:50.351 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:50.351 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:30:50.352 
09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:30:50.352 09:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.352 --rc genhtml_branch_coverage=1 00:30:50.352 --rc genhtml_function_coverage=1 00:30:50.352 --rc genhtml_legend=1 00:30:50.352 --rc geninfo_all_blocks=1 00:30:50.352 --rc geninfo_unexecuted_blocks=1 00:30:50.352 00:30:50.352 ' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.352 --rc genhtml_branch_coverage=1 00:30:50.352 --rc genhtml_function_coverage=1 00:30:50.352 --rc genhtml_legend=1 00:30:50.352 --rc geninfo_all_blocks=1 00:30:50.352 --rc geninfo_unexecuted_blocks=1 00:30:50.352 00:30:50.352 ' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.352 --rc genhtml_branch_coverage=1 00:30:50.352 --rc genhtml_function_coverage=1 00:30:50.352 --rc genhtml_legend=1 00:30:50.352 --rc geninfo_all_blocks=1 00:30:50.352 --rc geninfo_unexecuted_blocks=1 00:30:50.352 00:30:50.352 ' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # 
LCOV='lcov 00:30:50.352 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:50.352 --rc genhtml_branch_coverage=1 00:30:50.352 --rc genhtml_function_coverage=1 00:30:50.352 --rc genhtml_legend=1 00:30:50.352 --rc geninfo_all_blocks=1 00:30:50.352 --rc geninfo_unexecuted_blocks=1 00:30:50.352 00:30:50.352 ' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:50.352 09:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.352 09:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:50.352 09:15:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:50.352 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:30:50.353 09:15:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:30:55.747 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:55.747 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.747 
09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:55.747 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:55.747 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:56.007 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:56.008 Found net devices under 0000:86:00.0: cvl_0_0 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:56.008 Found net devices under 0000:86:00.1: cvl_0_1 00:30:56.008 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:56.008 
09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:56.008 10.0.0.1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.008 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:56.008 10.0.0.2 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:56.008 09:15:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.008 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.009 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:56.009 09:15:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # 
dev_map["$key_target"]=cvl_0_1 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.009 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:56.268 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:56.268 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.350 ms 00:30:56.268 00:30:56.268 --- 10.0.0.1 ping statistics --- 00:30:56.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.268 rtt min/avg/max/mdev = 0.350/0.350/0.350/0.000 ms 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:56.268 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:56.268 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.109 ms 00:30:56.268 00:30:56.268 --- 10.0.0.2 ping statistics --- 00:30:56.268 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:56.268 rtt min/avg/max/mdev = 0.109/0.109/0.109/0.000 ms 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:56.268 09:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:56.268 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 
-- # local dev=initiator1 in_ns= ip 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp 
']' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=2560085 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 2560085 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 2560085 ']' 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:56.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:56.269 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.269 [2024-11-20 09:15:12.235213] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:56.269 [2024-11-20 09:15:12.236140] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:30:56.269 [2024-11-20 09:15:12.236175] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:56.528 [2024-11-20 09:15:12.314000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:56.528 [2024-11-20 09:15:12.357370] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:56.528 [2024-11-20 09:15:12.357406] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:56.528 [2024-11-20 09:15:12.357413] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:56.528 [2024-11-20 09:15:12.357420] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:56.528 [2024-11-20 09:15:12.357426] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:56.528 [2024-11-20 09:15:12.359051] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:56.528 [2024-11-20 09:15:12.359158] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:56.528 [2024-11-20 09:15:12.359183] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:56.528 [2024-11-20 09:15:12.359184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:56.528 [2024-11-20 09:15:12.427418] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:56.528 [2024-11-20 09:15:12.428102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:56.528 [2024-11-20 09:15:12.428374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:56.528 [2024-11-20 09:15:12.428742] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:56.528 [2024-11-20 09:15:12.428795] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:56.528 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:56.528 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:30:56.529 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:56.529 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:56.529 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:30:56.529 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:56.529 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:56.787 [2024-11-20 09:15:12.660009] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:56.787 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:57.045 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:30:57.045 09:15:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:57.304 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:30:57.304 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:57.564 
09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:30:57.564 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:57.564 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:30:57.564 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:30:57.823 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:58.082 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:30:58.082 09:15:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:58.340 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:30:58.340 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:30:58.599 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:30:58.599 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:30:58.599 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:30:58.858 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:58.858 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:59.117 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:30:59.117 09:15:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:30:59.375 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:59.375 [2024-11-20 09:15:15.331929] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:59.375 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:30:59.634 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:30:59.893 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:00.152 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:00.152 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:00.152 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:00.153 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:00.153 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:00.153 09:15:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:02.055 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:02.055 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:02.055 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:02.055 09:15:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:02.055 09:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:02.055 09:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:02.056 09:15:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:02.056 [global] 00:31:02.056 thread=1 00:31:02.056 invalidate=1 
00:31:02.056 rw=write 00:31:02.056 time_based=1 00:31:02.056 runtime=1 00:31:02.056 ioengine=libaio 00:31:02.056 direct=1 00:31:02.056 bs=4096 00:31:02.056 iodepth=1 00:31:02.056 norandommap=0 00:31:02.056 numjobs=1 00:31:02.056 00:31:02.056 verify_dump=1 00:31:02.056 verify_backlog=512 00:31:02.056 verify_state_save=0 00:31:02.056 do_verify=1 00:31:02.056 verify=crc32c-intel 00:31:02.056 [job0] 00:31:02.056 filename=/dev/nvme0n1 00:31:02.056 [job1] 00:31:02.056 filename=/dev/nvme0n2 00:31:02.056 [job2] 00:31:02.056 filename=/dev/nvme0n3 00:31:02.056 [job3] 00:31:02.056 filename=/dev/nvme0n4 00:31:02.313 Could not set queue depth (nvme0n1) 00:31:02.313 Could not set queue depth (nvme0n2) 00:31:02.313 Could not set queue depth (nvme0n3) 00:31:02.313 Could not set queue depth (nvme0n4) 00:31:02.571 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:02.571 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:02.571 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:02.572 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:02.572 fio-3.35 00:31:02.572 Starting 4 threads 00:31:03.948 00:31:03.948 job0: (groupid=0, jobs=1): err= 0: pid=2561204: Wed Nov 20 09:15:19 2024 00:31:03.948 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:31:03.948 slat (nsec): min=9578, max=28707, avg=22887.27, stdev=4201.40 00:31:03.948 clat (usec): min=40859, max=41146, avg=40978.66, stdev=74.93 00:31:03.948 lat (usec): min=40887, max=41155, avg=41001.54, stdev=72.51 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:03.948 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:03.948 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 
00:31:03.948 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:03.948 | 99.99th=[41157] 00:31:03.948 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:03.948 slat (nsec): min=9824, max=45561, avg=11172.07, stdev=2250.75 00:31:03.948 clat (usec): min=153, max=481, avg=183.09, stdev=22.76 00:31:03.948 lat (usec): min=164, max=492, avg=194.26, stdev=23.46 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[ 161], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:31:03.948 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 182], 60.00th=[ 184], 00:31:03.948 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 202], 00:31:03.948 | 99.00th=[ 253], 99.50th=[ 379], 99.90th=[ 482], 99.95th=[ 482], 00:31:03.948 | 99.99th=[ 482] 00:31:03.948 bw ( KiB/s): min= 4096, max= 4096, per=50.30%, avg=4096.00, stdev= 0.00, samples=1 00:31:03.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:03.948 lat (usec) : 250=94.76%, 500=1.12% 00:31:03.948 lat (msec) : 50=4.12% 00:31:03.948 cpu : usr=0.60%, sys=0.30%, ctx=536, majf=0, minf=1 00:31:03.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:03.948 job1: (groupid=0, jobs=1): err= 0: pid=2561205: Wed Nov 20 09:15:19 2024 00:31:03.948 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:31:03.948 slat (nsec): min=10576, max=24228, avg=21278.73, stdev=2469.49 00:31:03.948 clat (usec): min=40438, max=41047, avg=40948.09, stdev=123.65 00:31:03.948 lat (usec): min=40448, max=41069, avg=40969.37, stdev=125.85 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[40633], 5.00th=[40633], 
10.00th=[41157], 20.00th=[41157], 00:31:03.948 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:03.948 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:03.948 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:03.948 | 99.99th=[41157] 00:31:03.948 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:31:03.948 slat (nsec): min=11318, max=46361, avg=14173.34, stdev=4643.21 00:31:03.948 clat (usec): min=145, max=305, avg=179.19, stdev=13.42 00:31:03.948 lat (usec): min=163, max=350, avg=193.37, stdev=14.51 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[ 155], 5.00th=[ 163], 10.00th=[ 165], 20.00th=[ 169], 00:31:03.948 | 30.00th=[ 174], 40.00th=[ 176], 50.00th=[ 178], 60.00th=[ 182], 00:31:03.948 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 194], 95.00th=[ 200], 00:31:03.948 | 99.00th=[ 215], 99.50th=[ 233], 99.90th=[ 306], 99.95th=[ 306], 00:31:03.948 | 99.99th=[ 306] 00:31:03.948 bw ( KiB/s): min= 4096, max= 4096, per=50.30%, avg=4096.00, stdev= 0.00, samples=1 00:31:03.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:03.948 lat (usec) : 250=95.51%, 500=0.37% 00:31:03.948 lat (msec) : 50=4.12% 00:31:03.948 cpu : usr=0.40%, sys=1.00%, ctx=534, majf=0, minf=1 00:31:03.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:03.948 job2: (groupid=0, jobs=1): err= 0: pid=2561206: Wed Nov 20 09:15:19 2024 00:31:03.948 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec) 00:31:03.948 slat (nsec): min=9764, max=26003, avg=22956.77, stdev=3086.87 00:31:03.948 clat (usec): min=40516, max=41081, 
avg=40948.03, stdev=118.36 00:31:03.948 lat (usec): min=40526, max=41104, avg=40970.99, stdev=120.53 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:03.948 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:03.948 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:03.948 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:03.948 | 99.99th=[41157] 00:31:03.948 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets 00:31:03.948 slat (nsec): min=9176, max=30663, avg=12627.05, stdev=1885.84 00:31:03.948 clat (usec): min=137, max=346, avg=182.01, stdev=14.28 00:31:03.948 lat (usec): min=149, max=377, avg=194.64, stdev=14.99 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[ 163], 5.00th=[ 167], 10.00th=[ 169], 20.00th=[ 174], 00:31:03.948 | 30.00th=[ 176], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 184], 00:31:03.948 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 196], 95.00th=[ 204], 00:31:03.948 | 99.00th=[ 221], 99.50th=[ 241], 99.90th=[ 347], 99.95th=[ 347], 00:31:03.948 | 99.99th=[ 347] 00:31:03.948 bw ( KiB/s): min= 4096, max= 4096, per=50.30%, avg=4096.00, stdev= 0.00, samples=1 00:31:03.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:03.948 lat (usec) : 250=95.51%, 500=0.37% 00:31:03.948 lat (msec) : 50=4.12% 00:31:03.948 cpu : usr=0.80%, sys=0.60%, ctx=537, majf=0, minf=1 00:31:03.948 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.948 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.948 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:03.948 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:03.948 job3: (groupid=0, jobs=1): err= 0: pid=2561207: Wed Nov 20 09:15:19 2024 
00:31:03.948 read: IOPS=21, BW=87.8KiB/s (89.9kB/s)(88.0KiB/1002msec) 00:31:03.948 slat (nsec): min=10188, max=24853, avg=22309.45, stdev=2857.00 00:31:03.948 clat (usec): min=40773, max=41086, avg=40967.61, stdev=77.95 00:31:03.948 lat (usec): min=40795, max=41110, avg=40989.92, stdev=78.31 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:31:03.948 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:03.948 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:03.948 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:03.948 | 99.99th=[41157] 00:31:03.948 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:31:03.948 slat (nsec): min=10092, max=45351, avg=11750.55, stdev=2514.35 00:31:03.948 clat (usec): min=141, max=286, avg=181.31, stdev=13.07 00:31:03.948 lat (usec): min=171, max=297, avg=193.06, stdev=13.42 00:31:03.948 clat percentiles (usec): 00:31:03.948 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 172], 00:31:03.948 | 30.00th=[ 174], 40.00th=[ 178], 50.00th=[ 180], 60.00th=[ 182], 00:31:03.948 | 70.00th=[ 186], 80.00th=[ 190], 90.00th=[ 198], 95.00th=[ 204], 00:31:03.948 | 99.00th=[ 223], 99.50th=[ 231], 99.90th=[ 285], 99.95th=[ 285], 00:31:03.948 | 99.99th=[ 285] 00:31:03.948 bw ( KiB/s): min= 4096, max= 4096, per=50.30%, avg=4096.00, stdev= 0.00, samples=1 00:31:03.948 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:03.948 lat (usec) : 250=95.69%, 500=0.19% 00:31:03.948 lat (msec) : 50=4.12% 00:31:03.949 cpu : usr=0.60%, sys=0.80%, ctx=534, majf=0, minf=2 00:31:03.949 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:03.949 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.949 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:03.949 issued rwts: total=22,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:31:03.949 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:03.949 00:31:03.949 Run status group 0 (all jobs): 00:31:03.949 READ: bw=350KiB/s (358kB/s), 87.5KiB/s-87.8KiB/s (89.6kB/s-89.9kB/s), io=352KiB (360kB), run=1002-1006msec 00:31:03.949 WRITE: bw=8143KiB/s (8339kB/s), 2036KiB/s-2044KiB/s (2085kB/s-2093kB/s), io=8192KiB (8389kB), run=1002-1006msec 00:31:03.949 00:31:03.949 Disk stats (read/write): 00:31:03.949 nvme0n1: ios=42/512, merge=0/0, ticks=1559/91, in_queue=1650, util=83.17% 00:31:03.949 nvme0n2: ios=67/512, merge=0/0, ticks=763/83, in_queue=846, util=88.74% 00:31:03.949 nvme0n3: ios=40/512, merge=0/0, ticks=1600/86, in_queue=1686, util=91.19% 00:31:03.949 nvme0n4: ios=74/512, merge=0/0, ticks=773/87, in_queue=860, util=95.38% 00:31:03.949 09:15:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:31:03.949 [global] 00:31:03.949 thread=1 00:31:03.949 invalidate=1 00:31:03.949 rw=randwrite 00:31:03.949 time_based=1 00:31:03.949 runtime=1 00:31:03.949 ioengine=libaio 00:31:03.949 direct=1 00:31:03.949 bs=4096 00:31:03.949 iodepth=1 00:31:03.949 norandommap=0 00:31:03.949 numjobs=1 00:31:03.949 00:31:03.949 verify_dump=1 00:31:03.949 verify_backlog=512 00:31:03.949 verify_state_save=0 00:31:03.949 do_verify=1 00:31:03.949 verify=crc32c-intel 00:31:03.949 [job0] 00:31:03.949 filename=/dev/nvme0n1 00:31:03.949 [job1] 00:31:03.949 filename=/dev/nvme0n2 00:31:03.949 [job2] 00:31:03.949 filename=/dev/nvme0n3 00:31:03.949 [job3] 00:31:03.949 filename=/dev/nvme0n4 00:31:03.949 Could not set queue depth (nvme0n1) 00:31:03.949 Could not set queue depth (nvme0n2) 00:31:03.949 Could not set queue depth (nvme0n3) 00:31:03.949 Could not set queue depth (nvme0n4) 00:31:03.949 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=1 00:31:03.949 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.949 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.949 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:03.949 fio-3.35 00:31:03.949 Starting 4 threads 00:31:05.322 00:31:05.322 job0: (groupid=0, jobs=1): err= 0: pid=2561584: Wed Nov 20 09:15:21 2024 00:31:05.322 read: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec) 00:31:05.322 slat (nsec): min=4195, max=27903, avg=7705.77, stdev=1290.47 00:31:05.322 clat (usec): min=183, max=29202, avg=286.72, stdev=643.64 00:31:05.322 lat (usec): min=190, max=29207, avg=294.43, stdev=643.54 00:31:05.322 clat percentiles (usec): 00:31:05.322 | 1.00th=[ 202], 5.00th=[ 208], 10.00th=[ 210], 20.00th=[ 215], 00:31:05.322 | 30.00th=[ 221], 40.00th=[ 229], 50.00th=[ 243], 60.00th=[ 249], 00:31:05.322 | 70.00th=[ 265], 80.00th=[ 392], 90.00th=[ 400], 95.00th=[ 408], 00:31:05.322 | 99.00th=[ 420], 99.50th=[ 445], 99.90th=[ 482], 99.95th=[ 545], 00:31:05.322 | 99.99th=[29230] 00:31:05.322 write: IOPS=2251, BW=9007KiB/s (9223kB/s)(9016KiB/1001msec); 0 zone resets 00:31:05.322 slat (nsec): min=4391, max=44763, avg=10054.06, stdev=2748.71 00:31:05.322 clat (usec): min=118, max=2609, avg=161.51, stdev=63.81 00:31:05.323 lat (usec): min=128, max=2625, avg=171.56, stdev=63.27 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 124], 5.00th=[ 128], 10.00th=[ 130], 20.00th=[ 135], 00:31:05.323 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 149], 60.00th=[ 151], 00:31:05.323 | 70.00th=[ 157], 80.00th=[ 172], 90.00th=[ 233], 95.00th=[ 258], 00:31:05.323 | 99.00th=[ 273], 99.50th=[ 281], 99.90th=[ 310], 99.95th=[ 310], 00:31:05.323 | 99.99th=[ 2606] 00:31:05.323 bw ( KiB/s): min= 8192, max= 8192, per=36.55%, avg=8192.00, stdev= 0.00, samples=1 00:31:05.323 iops : min= 
2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:31:05.323 lat (usec) : 250=77.89%, 500=22.04%, 750=0.02% 00:31:05.323 lat (msec) : 4=0.02%, 50=0.02% 00:31:05.323 cpu : usr=2.10%, sys=3.90%, ctx=4305, majf=0, minf=1 00:31:05.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 issued rwts: total=2048,2254,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.323 job1: (groupid=0, jobs=1): err= 0: pid=2561591: Wed Nov 20 09:15:21 2024 00:31:05.323 read: IOPS=2239, BW=8959KiB/s (9174kB/s)(8968KiB/1001msec) 00:31:05.323 slat (nsec): min=7129, max=40087, avg=8321.90, stdev=1399.27 00:31:05.323 clat (usec): min=172, max=548, avg=235.48, stdev=35.56 00:31:05.323 lat (usec): min=180, max=556, avg=243.81, stdev=35.60 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 196], 5.00th=[ 206], 10.00th=[ 210], 20.00th=[ 215], 00:31:05.323 | 30.00th=[ 219], 40.00th=[ 223], 50.00th=[ 227], 60.00th=[ 239], 00:31:05.323 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 258], 95.00th=[ 273], 00:31:05.323 | 99.00th=[ 445], 99.50th=[ 486], 99.90th=[ 515], 99.95th=[ 537], 00:31:05.323 | 99.99th=[ 545] 00:31:05.323 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:05.323 slat (nsec): min=10103, max=50166, avg=11500.73, stdev=2034.89 00:31:05.323 clat (usec): min=121, max=3007, avg=159.73, stdev=61.38 00:31:05.323 lat (usec): min=136, max=3027, avg=171.23, stdev=61.67 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 129], 5.00th=[ 135], 10.00th=[ 137], 20.00th=[ 143], 00:31:05.323 | 30.00th=[ 147], 40.00th=[ 149], 50.00th=[ 153], 60.00th=[ 155], 00:31:05.323 | 70.00th=[ 159], 80.00th=[ 169], 90.00th=[ 192], 95.00th=[ 210], 00:31:05.323 | 99.00th=[ 251], 99.50th=[ 265], 
99.90th=[ 293], 99.95th=[ 297], 00:31:05.323 | 99.99th=[ 2999] 00:31:05.323 bw ( KiB/s): min=10856, max=10856, per=48.44%, avg=10856.00, stdev= 0.00, samples=1 00:31:05.323 iops : min= 2714, max= 2714, avg=2714.00, stdev= 0.00, samples=1 00:31:05.323 lat (usec) : 250=91.44%, 500=8.45%, 750=0.08% 00:31:05.323 lat (msec) : 4=0.02% 00:31:05.323 cpu : usr=4.20%, sys=7.40%, ctx=4803, majf=0, minf=1 00:31:05.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 issued rwts: total=2242,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.323 job2: (groupid=0, jobs=1): err= 0: pid=2561608: Wed Nov 20 09:15:21 2024 00:31:05.323 read: IOPS=23, BW=96.0KiB/s (98.3kB/s)(100KiB/1042msec) 00:31:05.323 slat (nsec): min=9067, max=24504, avg=22049.72, stdev=4794.42 00:31:05.323 clat (usec): min=225, max=42106, avg=37730.39, stdev=11286.88 00:31:05.323 lat (usec): min=249, max=42130, avg=37752.44, stdev=11286.36 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 227], 5.00th=[ 249], 10.00th=[40633], 20.00th=[40633], 00:31:05.323 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:05.323 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:05.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:05.323 | 99.99th=[42206] 00:31:05.323 write: IOPS=491, BW=1965KiB/s (2013kB/s)(2048KiB/1042msec); 0 zone resets 00:31:05.323 slat (nsec): min=4042, max=87653, avg=9240.13, stdev=5022.71 00:31:05.323 clat (usec): min=148, max=716, avg=178.45, stdev=30.57 00:31:05.323 lat (usec): min=152, max=804, avg=187.69, stdev=34.32 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 153], 5.00th=[ 159], 10.00th=[ 161], 20.00th=[ 163], 
00:31:05.323 | 30.00th=[ 167], 40.00th=[ 172], 50.00th=[ 174], 60.00th=[ 178], 00:31:05.323 | 70.00th=[ 184], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 206], 00:31:05.323 | 99.00th=[ 229], 99.50th=[ 330], 99.90th=[ 717], 99.95th=[ 717], 00:31:05.323 | 99.99th=[ 717] 00:31:05.323 bw ( KiB/s): min= 4096, max= 4096, per=18.28%, avg=4096.00, stdev= 0.00, samples=1 00:31:05.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:05.323 lat (usec) : 250=94.79%, 500=0.74%, 750=0.19% 00:31:05.323 lat (msec) : 50=4.28% 00:31:05.323 cpu : usr=0.67%, sys=0.10%, ctx=538, majf=0, minf=1 00:31:05.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 issued rwts: total=25,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.323 job3: (groupid=0, jobs=1): err= 0: pid=2561613: Wed Nov 20 09:15:21 2024 00:31:05.323 read: IOPS=21, BW=86.3KiB/s (88.3kB/s)(88.0KiB/1020msec) 00:31:05.323 slat (nsec): min=10478, max=25669, avg=24517.50, stdev=3165.66 00:31:05.323 clat (usec): min=40465, max=41954, avg=40995.67, stdev=247.55 00:31:05.323 lat (usec): min=40476, max=41979, avg=41020.19, stdev=249.06 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:05.323 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:05.323 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:05.323 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:05.323 | 99.99th=[42206] 00:31:05.323 write: IOPS=501, BW=2008KiB/s (2056kB/s)(2048KiB/1020msec); 0 zone resets 00:31:05.323 slat (nsec): min=10913, max=50369, avg=12491.76, stdev=2713.46 00:31:05.323 clat (usec): min=146, max=3649, 
avg=212.11, stdev=186.90 00:31:05.323 lat (usec): min=157, max=3699, avg=224.60, stdev=188.37 00:31:05.323 clat percentiles (usec): 00:31:05.323 | 1.00th=[ 157], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176], 00:31:05.323 | 30.00th=[ 182], 40.00th=[ 186], 50.00th=[ 192], 60.00th=[ 202], 00:31:05.323 | 70.00th=[ 208], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 258], 00:31:05.323 | 99.00th=[ 302], 99.50th=[ 371], 99.90th=[ 3654], 99.95th=[ 3654], 00:31:05.323 | 99.99th=[ 3654] 00:31:05.323 bw ( KiB/s): min= 4096, max= 4096, per=18.28%, avg=4096.00, stdev= 0.00, samples=1 00:31:05.323 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:31:05.323 lat (usec) : 250=89.33%, 500=6.18% 00:31:05.323 lat (msec) : 4=0.37%, 50=4.12% 00:31:05.323 cpu : usr=0.10%, sys=1.37%, ctx=536, majf=0, minf=1 00:31:05.323 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:05.323 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:05.323 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:05.323 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:05.323 00:31:05.323 Run status group 0 (all jobs): 00:31:05.323 READ: bw=16.3MiB/s (17.0MB/s), 86.3KiB/s-8959KiB/s (88.3kB/s-9174kB/s), io=16.9MiB (17.8MB), run=1001-1042msec 00:31:05.323 WRITE: bw=21.9MiB/s (22.9MB/s), 1965KiB/s-9.99MiB/s (2013kB/s-10.5MB/s), io=22.8MiB (23.9MB), run=1001-1042msec 00:31:05.323 00:31:05.323 Disk stats (read/write): 00:31:05.323 nvme0n1: ios=1615/2048, merge=0/0, ticks=519/322, in_queue=841, util=85.77% 00:31:05.323 nvme0n2: ios=1993/2048, merge=0/0, ticks=1343/308, in_queue=1651, util=89.64% 00:31:05.323 nvme0n3: ios=76/512, merge=0/0, ticks=1711/92, in_queue=1803, util=93.44% 00:31:05.323 nvme0n4: ios=42/512, merge=0/0, ticks=1605/105, in_queue=1710, util=94.43% 00:31:05.323 09:15:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:31:05.323 [global] 00:31:05.323 thread=1 00:31:05.323 invalidate=1 00:31:05.323 rw=write 00:31:05.323 time_based=1 00:31:05.323 runtime=1 00:31:05.323 ioengine=libaio 00:31:05.323 direct=1 00:31:05.323 bs=4096 00:31:05.323 iodepth=128 00:31:05.323 norandommap=0 00:31:05.323 numjobs=1 00:31:05.323 00:31:05.323 verify_dump=1 00:31:05.323 verify_backlog=512 00:31:05.323 verify_state_save=0 00:31:05.323 do_verify=1 00:31:05.323 verify=crc32c-intel 00:31:05.323 [job0] 00:31:05.323 filename=/dev/nvme0n1 00:31:05.323 [job1] 00:31:05.323 filename=/dev/nvme0n2 00:31:05.323 [job2] 00:31:05.323 filename=/dev/nvme0n3 00:31:05.323 [job3] 00:31:05.323 filename=/dev/nvme0n4 00:31:05.323 Could not set queue depth (nvme0n1) 00:31:05.324 Could not set queue depth (nvme0n2) 00:31:05.324 Could not set queue depth (nvme0n3) 00:31:05.324 Could not set queue depth (nvme0n4) 00:31:05.581 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:05.581 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:05.581 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:05.581 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:05.581 fio-3.35 00:31:05.581 Starting 4 threads 00:31:06.957 00:31:06.957 job0: (groupid=0, jobs=1): err= 0: pid=2562030: Wed Nov 20 09:15:22 2024 00:31:06.957 read: IOPS=5335, BW=20.8MiB/s (21.9MB/s)(20.9MiB/1004msec) 00:31:06.957 slat (nsec): min=1457, max=11342k, avg=93291.72, stdev=810789.75 00:31:06.957 clat (usec): min=850, max=32655, avg=11771.08, stdev=4452.51 00:31:06.957 lat (usec): min=1295, max=32680, avg=11864.37, stdev=4521.43 00:31:06.957 
clat percentiles (usec): 00:31:06.957 | 1.00th=[ 2008], 5.00th=[ 4015], 10.00th=[ 7570], 20.00th=[ 9372], 00:31:06.957 | 30.00th=[10290], 40.00th=[11076], 50.00th=[11338], 60.00th=[11469], 00:31:06.957 | 70.00th=[11863], 80.00th=[12518], 90.00th=[19530], 95.00th=[21890], 00:31:06.957 | 99.00th=[24511], 99.50th=[24511], 99.90th=[26870], 99.95th=[31589], 00:31:06.957 | 99.99th=[32637] 00:31:06.957 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets 00:31:06.957 slat (usec): min=2, max=9628, avg=72.65, stdev=518.04 00:31:06.957 clat (usec): min=288, max=53606, avg=11426.72, stdev=7234.42 00:31:06.957 lat (usec): min=304, max=53609, avg=11499.37, stdev=7254.12 00:31:06.957 clat percentiles (usec): 00:31:06.957 | 1.00th=[ 930], 5.00th=[ 2704], 10.00th=[ 5866], 20.00th=[ 8029], 00:31:06.957 | 30.00th=[ 9372], 40.00th=[10159], 50.00th=[10814], 60.00th=[11469], 00:31:06.957 | 70.00th=[11994], 80.00th=[12387], 90.00th=[15008], 95.00th=[19006], 00:31:06.957 | 99.00th=[47449], 99.50th=[49021], 99.90th=[53740], 99.95th=[53740], 00:31:06.957 | 99.99th=[53740] 00:31:06.957 bw ( KiB/s): min=21608, max=23448, per=30.10%, avg=22528.00, stdev=1301.08, samples=2 00:31:06.957 iops : min= 5402, max= 5862, avg=5632.00, stdev=325.27, samples=2 00:31:06.957 lat (usec) : 500=0.05%, 750=0.11%, 1000=0.60% 00:31:06.957 lat (msec) : 2=1.52%, 4=3.58%, 10=26.48%, 20=60.72%, 50=6.74% 00:31:06.957 lat (msec) : 100=0.21% 00:31:06.957 cpu : usr=4.39%, sys=6.28%, ctx=468, majf=0, minf=2 00:31:06.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:06.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:06.957 issued rwts: total=5357,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.957 job1: (groupid=0, jobs=1): err= 0: pid=2562043: Wed Nov 20 09:15:22 2024 
00:31:06.957 read: IOPS=5603, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1005msec) 00:31:06.957 slat (nsec): min=1675, max=11539k, avg=88492.15, stdev=614035.71 00:31:06.957 clat (usec): min=6280, max=20611, avg=11601.52, stdev=2240.49 00:31:06.957 lat (usec): min=6289, max=20638, avg=11690.01, stdev=2281.23 00:31:06.957 clat percentiles (usec): 00:31:06.957 | 1.00th=[ 6980], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9634], 00:31:06.957 | 30.00th=[10421], 40.00th=[10814], 50.00th=[11469], 60.00th=[11994], 00:31:06.957 | 70.00th=[12780], 80.00th=[13304], 90.00th=[14353], 95.00th=[15926], 00:31:06.957 | 99.00th=[17957], 99.50th=[18482], 99.90th=[19792], 99.95th=[19792], 00:31:06.957 | 99.99th=[20579] 00:31:06.957 write: IOPS=5698, BW=22.3MiB/s (23.3MB/s)(22.4MiB/1005msec); 0 zone resets 00:31:06.957 slat (usec): min=2, max=9568, avg=79.62, stdev=543.15 00:31:06.957 clat (usec): min=807, max=21627, avg=10822.70, stdev=2300.58 00:31:06.957 lat (usec): min=818, max=21648, avg=10902.32, stdev=2351.55 00:31:06.957 clat percentiles (usec): 00:31:06.957 | 1.00th=[ 4948], 5.00th=[ 7570], 10.00th=[ 8356], 20.00th=[ 9503], 00:31:06.957 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[11338], 00:31:06.957 | 70.00th=[11863], 80.00th=[12125], 90.00th=[13304], 95.00th=[15008], 00:31:06.957 | 99.00th=[18482], 99.50th=[18482], 99.90th=[18744], 99.95th=[20579], 00:31:06.957 | 99.99th=[21627] 00:31:06.957 bw ( KiB/s): min=20824, max=24232, per=30.10%, avg=22528.00, stdev=2409.82, samples=2 00:31:06.957 iops : min= 5206, max= 6058, avg=5632.00, stdev=602.45, samples=2 00:31:06.957 lat (usec) : 1000=0.06% 00:31:06.957 lat (msec) : 2=0.01%, 4=0.33%, 10=30.08%, 20=69.49%, 50=0.04% 00:31:06.957 cpu : usr=4.58%, sys=7.17%, ctx=448, majf=0, minf=1 00:31:06.957 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:31:06.957 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.957 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:31:06.957 issued rwts: total=5632,5727,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.957 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.957 job2: (groupid=0, jobs=1): err= 0: pid=2562060: Wed Nov 20 09:15:22 2024 00:31:06.957 read: IOPS=3934, BW=15.4MiB/s (16.1MB/s)(15.4MiB/1005msec) 00:31:06.957 slat (nsec): min=1079, max=12556k, avg=104294.48, stdev=722198.37 00:31:06.957 clat (usec): min=1013, max=33315, avg=14027.19, stdev=3256.26 00:31:06.957 lat (usec): min=6164, max=33330, avg=14131.48, stdev=3286.89 00:31:06.957 clat percentiles (usec): 00:31:06.958 | 1.00th=[ 8455], 5.00th=[ 9896], 10.00th=[10683], 20.00th=[11731], 00:31:06.958 | 30.00th=[12256], 40.00th=[12649], 50.00th=[13173], 60.00th=[13829], 00:31:06.958 | 70.00th=[14615], 80.00th=[16188], 90.00th=[18482], 95.00th=[20579], 00:31:06.958 | 99.00th=[24773], 99.50th=[24773], 99.90th=[25822], 99.95th=[29754], 00:31:06.958 | 99.99th=[33424] 00:31:06.958 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets 00:31:06.958 slat (usec): min=2, max=11758, avg=127.97, stdev=754.84 00:31:06.958 clat (msec): min=3, max=115, avg=17.51, stdev=15.15 00:31:06.958 lat (msec): min=3, max=115, avg=17.64, stdev=15.25 00:31:06.958 clat percentiles (msec): 00:31:06.958 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 12], 20.00th=[ 13], 00:31:06.958 | 30.00th=[ 14], 40.00th=[ 14], 50.00th=[ 14], 60.00th=[ 14], 00:31:06.958 | 70.00th=[ 14], 80.00th=[ 18], 90.00th=[ 20], 95.00th=[ 45], 00:31:06.958 | 99.00th=[ 104], 99.50th=[ 111], 99.90th=[ 116], 99.95th=[ 116], 00:31:06.958 | 99.99th=[ 116] 00:31:06.958 bw ( KiB/s): min=13448, max=19320, per=21.89%, avg=16384.00, stdev=4152.13, samples=2 00:31:06.958 iops : min= 3362, max= 4830, avg=4096.00, stdev=1038.03, samples=2 00:31:06.958 lat (msec) : 2=0.01%, 4=0.04%, 10=4.32%, 20=86.75%, 50=6.60% 00:31:06.958 lat (msec) : 100=1.70%, 250=0.58% 00:31:06.958 cpu : usr=3.19%, sys=5.18%, ctx=402, majf=0, minf=1 00:31:06.958 IO depths : 
1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:06.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:06.958 issued rwts: total=3954,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.958 job3: (groupid=0, jobs=1): err= 0: pid=2562066: Wed Nov 20 09:15:22 2024 00:31:06.958 read: IOPS=3889, BW=15.2MiB/s (15.9MB/s)(15.9MiB/1045msec) 00:31:06.958 slat (nsec): min=1051, max=17160k, avg=112818.04, stdev=806097.67 00:31:06.958 clat (usec): min=4161, max=73147, avg=16378.56, stdev=10314.31 00:31:06.958 lat (usec): min=4172, max=73165, avg=16491.37, stdev=10339.52 00:31:06.958 clat percentiles (usec): 00:31:06.958 | 1.00th=[ 7439], 5.00th=[ 9896], 10.00th=[10421], 20.00th=[12256], 00:31:06.958 | 30.00th=[12911], 40.00th=[13304], 50.00th=[13435], 60.00th=[13960], 00:31:06.958 | 70.00th=[15139], 80.00th=[17171], 90.00th=[20055], 95.00th=[45876], 00:31:06.958 | 99.00th=[70779], 99.50th=[72877], 99.90th=[72877], 99.95th=[72877], 00:31:06.958 | 99.99th=[72877] 00:31:06.958 write: IOPS=3919, BW=15.3MiB/s (16.1MB/s)(16.0MiB/1045msec); 0 zone resets 00:31:06.958 slat (usec): min=2, max=13444, avg=125.54, stdev=830.79 00:31:06.958 clat (usec): min=395, max=54947, avg=15722.53, stdev=6086.27 00:31:06.958 lat (usec): min=422, max=62114, avg=15848.07, stdev=6183.68 00:31:06.958 clat percentiles (usec): 00:31:06.958 | 1.00th=[ 5800], 5.00th=[ 9110], 10.00th=[11469], 20.00th=[12780], 00:31:06.958 | 30.00th=[13304], 40.00th=[13566], 50.00th=[13829], 60.00th=[13960], 00:31:06.958 | 70.00th=[15139], 80.00th=[19268], 90.00th=[22676], 95.00th=[27657], 00:31:06.958 | 99.00th=[34866], 99.50th=[45876], 99.90th=[54789], 99.95th=[54789], 00:31:06.958 | 99.99th=[54789] 00:31:06.958 bw ( KiB/s): min=16384, max=16384, per=21.89%, avg=16384.00, stdev= 0.00, samples=2 00:31:06.958 iops : min= 4096, 
max= 4096, avg=4096.00, stdev= 0.00, samples=2 00:31:06.958 lat (usec) : 500=0.02% 00:31:06.958 lat (msec) : 2=0.10%, 4=0.22%, 10=5.56%, 20=79.81%, 50=12.09% 00:31:06.958 lat (msec) : 100=2.19% 00:31:06.958 cpu : usr=2.49%, sys=5.17%, ctx=347, majf=0, minf=1 00:31:06.958 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:31:06.958 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:06.958 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:06.958 issued rwts: total=4065,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:06.958 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:06.958 00:31:06.958 Run status group 0 (all jobs): 00:31:06.958 READ: bw=71.1MiB/s (74.5MB/s), 15.2MiB/s-21.9MiB/s (15.9MB/s-23.0MB/s), io=74.2MiB (77.9MB), run=1004-1045msec 00:31:06.958 WRITE: bw=73.1MiB/s (76.6MB/s), 15.3MiB/s-22.3MiB/s (16.1MB/s-23.3MB/s), io=76.4MiB (80.1MB), run=1004-1045msec 00:31:06.958 00:31:06.958 Disk stats (read/write): 00:31:06.958 nvme0n1: ios=4658/4687, merge=0/0, ticks=51266/49854, in_queue=101120, util=86.97% 00:31:06.958 nvme0n2: ios=4657/5077, merge=0/0, ticks=38744/37646, in_queue=76390, util=89.64% 00:31:06.958 nvme0n3: ios=3188/3542, merge=0/0, ticks=22124/28948, in_queue=51072, util=94.79% 00:31:06.958 nvme0n4: ios=3469/3584, merge=0/0, ticks=30759/28765, in_queue=59524, util=93.49% 00:31:06.958 09:15:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:31:06.958 [global] 00:31:06.958 thread=1 00:31:06.958 invalidate=1 00:31:06.958 rw=randwrite 00:31:06.958 time_based=1 00:31:06.958 runtime=1 00:31:06.958 ioengine=libaio 00:31:06.958 direct=1 00:31:06.958 bs=4096 00:31:06.958 iodepth=128 00:31:06.958 norandommap=0 00:31:06.958 numjobs=1 00:31:06.958 00:31:06.958 verify_dump=1 00:31:06.958 verify_backlog=512 00:31:06.958 
verify_state_save=0 00:31:06.958 do_verify=1 00:31:06.958 verify=crc32c-intel 00:31:06.958 [job0] 00:31:06.958 filename=/dev/nvme0n1 00:31:06.958 [job1] 00:31:06.958 filename=/dev/nvme0n2 00:31:06.958 [job2] 00:31:06.958 filename=/dev/nvme0n3 00:31:06.958 [job3] 00:31:06.958 filename=/dev/nvme0n4 00:31:06.958 Could not set queue depth (nvme0n1) 00:31:06.958 Could not set queue depth (nvme0n2) 00:31:06.958 Could not set queue depth (nvme0n3) 00:31:06.958 Could not set queue depth (nvme0n4) 00:31:07.217 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:07.217 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:07.217 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:07.217 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:31:07.217 fio-3.35 00:31:07.217 Starting 4 threads 00:31:08.608 00:31:08.608 job0: (groupid=0, jobs=1): err= 0: pid=2562461: Wed Nov 20 09:15:24 2024 00:31:08.608 read: IOPS=5692, BW=22.2MiB/s (23.3MB/s)(22.3MiB/1004msec) 00:31:08.608 slat (nsec): min=1330, max=4982.4k, avg=82889.22, stdev=522084.72 00:31:08.608 clat (usec): min=951, max=17951, avg=10605.65, stdev=1748.51 00:31:08.608 lat (usec): min=4492, max=17956, avg=10688.54, stdev=1781.91 00:31:08.608 clat percentiles (usec): 00:31:08.608 | 1.00th=[ 6783], 5.00th=[ 7898], 10.00th=[ 8717], 20.00th=[ 9372], 00:31:08.608 | 30.00th=[ 9765], 40.00th=[10159], 50.00th=[10421], 60.00th=[10683], 00:31:08.608 | 70.00th=[11207], 80.00th=[11863], 90.00th=[12911], 95.00th=[13829], 00:31:08.608 | 99.00th=[15664], 99.50th=[15926], 99.90th=[16712], 99.95th=[16712], 00:31:08.608 | 99.99th=[17957] 00:31:08.608 write: IOPS=6119, BW=23.9MiB/s (25.1MB/s)(24.0MiB/1004msec); 0 zone resets 00:31:08.608 slat (nsec): min=1950, max=20642k, avg=80806.02, 
stdev=568038.24 00:31:08.608 clat (usec): min=4437, max=33543, avg=10867.81, stdev=2247.30 00:31:08.608 lat (usec): min=4463, max=33576, avg=10948.61, stdev=2284.82 00:31:08.608 clat percentiles (usec): 00:31:08.608 | 1.00th=[ 6456], 5.00th=[ 8160], 10.00th=[ 9372], 20.00th=[ 9896], 00:31:08.608 | 30.00th=[10159], 40.00th=[10290], 50.00th=[10421], 60.00th=[10552], 00:31:08.608 | 70.00th=[10945], 80.00th=[11731], 90.00th=[12518], 95.00th=[14484], 00:31:08.608 | 99.00th=[22152], 99.50th=[22414], 99.90th=[22414], 99.95th=[22414], 00:31:08.608 | 99.99th=[33424] 00:31:08.608 bw ( KiB/s): min=24216, max=24576, per=32.88%, avg=24396.00, stdev=254.56, samples=2 00:31:08.608 iops : min= 6054, max= 6144, avg=6099.00, stdev=63.64, samples=2 00:31:08.608 lat (usec) : 1000=0.01% 00:31:08.608 lat (msec) : 10=30.50%, 20=68.42%, 50=1.07% 00:31:08.608 cpu : usr=4.79%, sys=5.78%, ctx=484, majf=0, minf=1 00:31:08.608 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:31:08.608 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.608 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.608 issued rwts: total=5715,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.608 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:08.608 job1: (groupid=0, jobs=1): err= 0: pid=2562473: Wed Nov 20 09:15:24 2024 00:31:08.608 read: IOPS=4552, BW=17.8MiB/s (18.6MB/s)(17.9MiB/1007msec) 00:31:08.608 slat (nsec): min=1323, max=11641k, avg=105735.30, stdev=788192.96 00:31:08.608 clat (usec): min=1230, max=43323, avg=12932.22, stdev=4166.31 00:31:08.608 lat (usec): min=4032, max=43330, avg=13037.96, stdev=4232.87 00:31:08.608 clat percentiles (usec): 00:31:08.608 | 1.00th=[ 7767], 5.00th=[ 9372], 10.00th=[ 9896], 20.00th=[10421], 00:31:08.608 | 30.00th=[10945], 40.00th=[11338], 50.00th=[11863], 60.00th=[12780], 00:31:08.608 | 70.00th=[13042], 80.00th=[14222], 90.00th=[16909], 95.00th=[19530], 00:31:08.608 | 
99.00th=[31851], 99.50th=[35914], 99.90th=[39584], 99.95th=[43254], 00:31:08.608 | 99.99th=[43254] 00:31:08.608 write: IOPS=4575, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1007msec); 0 zone resets 00:31:08.608 slat (usec): min=2, max=10635, avg=104.13, stdev=634.58 00:31:08.608 clat (usec): min=2485, max=43320, avg=14730.34, stdev=8377.82 00:31:08.608 lat (usec): min=2498, max=43327, avg=14834.46, stdev=8439.49 00:31:08.608 clat percentiles (usec): 00:31:08.608 | 1.00th=[ 5538], 5.00th=[ 7504], 10.00th=[ 8291], 20.00th=[ 9241], 00:31:08.608 | 30.00th=[ 9765], 40.00th=[10421], 50.00th=[11469], 60.00th=[11994], 00:31:08.608 | 70.00th=[13304], 80.00th=[17433], 90.00th=[32900], 95.00th=[33424], 00:31:08.608 | 99.00th=[33817], 99.50th=[33817], 99.90th=[34341], 99.95th=[43254], 00:31:08.608 | 99.99th=[43254] 00:31:08.608 bw ( KiB/s): min=17264, max=19600, per=24.84%, avg=18432.00, stdev=1651.80, samples=2 00:31:08.608 iops : min= 4316, max= 4900, avg=4608.00, stdev=412.95, samples=2 00:31:08.608 lat (msec) : 2=0.01%, 4=0.07%, 10=23.44%, 20=64.57%, 50=11.91% 00:31:08.608 cpu : usr=3.88%, sys=6.66%, ctx=332, majf=0, minf=2 00:31:08.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:31:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.609 issued rwts: total=4584,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:08.609 job2: (groupid=0, jobs=1): err= 0: pid=2562489: Wed Nov 20 09:15:24 2024 00:31:08.609 read: IOPS=2520, BW=9.84MiB/s (10.3MB/s)(9.88MiB/1003msec) 00:31:08.609 slat (nsec): min=1674, max=27435k, avg=232441.79, stdev=1605558.17 00:31:08.609 clat (usec): min=644, max=84463, avg=29520.40, stdev=22470.61 00:31:08.609 lat (usec): min=4029, max=84471, avg=29752.85, stdev=22577.91 00:31:08.609 clat percentiles (usec): 00:31:08.609 | 1.00th=[ 4293], 
5.00th=[11994], 10.00th=[13173], 20.00th=[14091], 00:31:08.609 | 30.00th=[14484], 40.00th=[16188], 50.00th=[16909], 60.00th=[19530], 00:31:08.609 | 70.00th=[31327], 80.00th=[53216], 90.00th=[66847], 95.00th=[77071], 00:31:08.609 | 99.00th=[84411], 99.50th=[84411], 99.90th=[84411], 99.95th=[84411], 00:31:08.609 | 99.99th=[84411] 00:31:08.609 write: IOPS=2552, BW=9.97MiB/s (10.5MB/s)(10.0MiB/1003msec); 0 zone resets 00:31:08.609 slat (usec): min=2, max=21988, avg=154.13, stdev=1057.11 00:31:08.609 clat (usec): min=8677, max=81742, avg=20315.74, stdev=13749.87 00:31:08.609 lat (usec): min=8852, max=81752, avg=20469.88, stdev=13816.21 00:31:08.609 clat percentiles (usec): 00:31:08.609 | 1.00th=[11076], 5.00th=[11863], 10.00th=[13042], 20.00th=[13960], 00:31:08.609 | 30.00th=[14222], 40.00th=[14484], 50.00th=[16188], 60.00th=[16450], 00:31:08.609 | 70.00th=[16712], 80.00th=[16909], 90.00th=[45876], 95.00th=[50594], 00:31:08.609 | 99.00th=[73925], 99.50th=[74974], 99.90th=[81265], 99.95th=[81265], 00:31:08.609 | 99.99th=[81265] 00:31:08.609 bw ( KiB/s): min= 8200, max=12280, per=13.80%, avg=10240.00, stdev=2885.00, samples=2 00:31:08.609 iops : min= 2050, max= 3070, avg=2560.00, stdev=721.25, samples=2 00:31:08.609 lat (usec) : 750=0.02% 00:31:08.609 lat (msec) : 10=1.40%, 20=71.31%, 50=13.48%, 100=13.80% 00:31:08.609 cpu : usr=2.79%, sys=3.59%, ctx=233, majf=0, minf=1 00:31:08.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.6%, >=64=98.8% 00:31:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.609 issued rwts: total=2528,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.609 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:08.609 job3: (groupid=0, jobs=1): err= 0: pid=2562495: Wed Nov 20 09:15:24 2024 00:31:08.609 read: IOPS=5074, BW=19.8MiB/s (20.8MB/s)(20.0MiB/1009msec) 00:31:08.609 slat (nsec): min=1253, max=12352k, 
avg=101265.37, stdev=856904.08 00:31:08.609 clat (usec): min=3431, max=25797, avg=12433.02, stdev=3286.04 00:31:08.609 lat (usec): min=3518, max=29329, avg=12534.29, stdev=3362.58 00:31:08.609 clat percentiles (usec): 00:31:08.609 | 1.00th=[ 6718], 5.00th=[ 8586], 10.00th=[ 9634], 20.00th=[10421], 00:31:08.609 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11338], 60.00th=[11863], 00:31:08.609 | 70.00th=[13042], 80.00th=[14353], 90.00th=[17433], 95.00th=[19792], 00:31:08.609 | 99.00th=[21890], 99.50th=[23725], 99.90th=[25297], 99.95th=[25297], 00:31:08.609 | 99.99th=[25822] 00:31:08.609 write: IOPS=5357, BW=20.9MiB/s (21.9MB/s)(21.1MiB/1009msec); 0 zone resets 00:31:08.609 slat (usec): min=2, max=25420, avg=85.66, stdev=690.49 00:31:08.609 clat (usec): min=1469, max=39754, avg=11885.83, stdev=3996.26 00:31:08.609 lat (usec): min=1945, max=39788, avg=11971.49, stdev=4029.60 00:31:08.609 clat percentiles (usec): 00:31:08.609 | 1.00th=[ 3458], 5.00th=[ 6718], 10.00th=[ 7767], 20.00th=[ 9896], 00:31:08.609 | 30.00th=[10683], 40.00th=[11076], 50.00th=[11600], 60.00th=[11863], 00:31:08.609 | 70.00th=[12125], 80.00th=[13829], 90.00th=[15401], 95.00th=[18220], 00:31:08.609 | 99.00th=[30540], 99.50th=[30540], 99.90th=[30540], 99.95th=[30540], 00:31:08.609 | 99.99th=[39584] 00:31:08.609 bw ( KiB/s): min=20480, max=21744, per=28.45%, avg=21112.00, stdev=893.78, samples=2 00:31:08.609 iops : min= 5120, max= 5436, avg=5278.00, stdev=223.45, samples=2 00:31:08.609 lat (msec) : 2=0.07%, 4=0.72%, 10=17.58%, 20=77.55%, 50=4.09% 00:31:08.609 cpu : usr=3.57%, sys=4.46%, ctx=420, majf=0, minf=1 00:31:08.609 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:08.609 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:08.609 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:08.609 issued rwts: total=5120,5406,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:08.609 latency : target=0, window=0, percentile=100.00%, 
depth=128 00:31:08.609 00:31:08.609 Run status group 0 (all jobs): 00:31:08.609 READ: bw=69.5MiB/s (72.9MB/s), 9.84MiB/s-22.2MiB/s (10.3MB/s-23.3MB/s), io=70.1MiB (73.5MB), run=1003-1009msec 00:31:08.609 WRITE: bw=72.5MiB/s (76.0MB/s), 9.97MiB/s-23.9MiB/s (10.5MB/s-25.1MB/s), io=73.1MiB (76.7MB), run=1003-1009msec 00:31:08.609 00:31:08.609 Disk stats (read/write): 00:31:08.609 nvme0n1: ios=5142/5127, merge=0/0, ticks=26217/28232, in_queue=54449, util=87.17% 00:31:08.609 nvme0n2: ios=3619/3919, merge=0/0, ticks=45571/58247, in_queue=103818, util=97.66% 00:31:08.609 nvme0n3: ios=1753/2048, merge=0/0, ticks=15988/10399, in_queue=26387, util=90.74% 00:31:08.609 nvme0n4: ios=4342/4608, merge=0/0, ticks=51673/53042, in_queue=104715, util=89.61% 00:31:08.609 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:08.609 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2562588 00:31:08.609 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:08.609 09:15:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:08.609 [global] 00:31:08.609 thread=1 00:31:08.609 invalidate=1 00:31:08.609 rw=read 00:31:08.609 time_based=1 00:31:08.609 runtime=10 00:31:08.609 ioengine=libaio 00:31:08.609 direct=1 00:31:08.609 bs=4096 00:31:08.609 iodepth=1 00:31:08.609 norandommap=1 00:31:08.609 numjobs=1 00:31:08.609 00:31:08.609 [job0] 00:31:08.609 filename=/dev/nvme0n1 00:31:08.609 [job1] 00:31:08.609 filename=/dev/nvme0n2 00:31:08.609 [job2] 00:31:08.609 filename=/dev/nvme0n3 00:31:08.609 [job3] 00:31:08.609 filename=/dev/nvme0n4 00:31:08.609 Could not set queue depth (nvme0n1) 00:31:08.609 Could not set queue depth (nvme0n2) 00:31:08.609 Could not set queue depth (nvme0n3) 00:31:08.609 Could not set queue 
depth (nvme0n4) 00:31:08.874 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.874 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.874 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.874 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:08.874 fio-3.35 00:31:08.874 Starting 4 threads 00:31:11.399 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:11.657 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=41529344, buflen=4096 00:31:11.657 fio: pid=2562910, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:11.657 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:11.915 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:11.915 09:15:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:11.915 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=44265472, buflen=4096 00:31:11.915 fio: pid=2562909, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:12.172 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:12.172 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:12.172 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=8171520, buflen=4096 00:31:12.172 fio: pid=2562898, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:12.429 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=1253376, buflen=4096 00:31:12.429 fio: pid=2562908, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:12.429 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:12.429 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:12.429 00:31:12.429 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2562898: Wed Nov 20 09:15:28 2024 00:31:12.429 read: IOPS=625, BW=2502KiB/s (2562kB/s)(7980KiB/3190msec) 00:31:12.429 slat (nsec): min=5634, max=67893, avg=8026.09, stdev=3357.30 00:31:12.429 clat (usec): min=192, max=41169, avg=1578.67, stdev=7231.26 00:31:12.429 lat (usec): min=199, max=41212, avg=1586.69, stdev=7234.07 00:31:12.429 clat percentiles (usec): 00:31:12.429 | 1.00th=[ 223], 5.00th=[ 231], 10.00th=[ 235], 20.00th=[ 239], 00:31:12.429 | 30.00th=[ 243], 40.00th=[ 245], 50.00th=[ 249], 60.00th=[ 253], 00:31:12.429 | 70.00th=[ 260], 80.00th=[ 265], 90.00th=[ 277], 95.00th=[ 297], 00:31:12.429 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:12.429 | 99.99th=[41157] 00:31:12.429 bw ( KiB/s): min= 96, max=10717, per=8.36%, avg=2303.50, stdev=4251.33, samples=6 00:31:12.429 iops : min= 24, max= 2679, avg=575.83, stdev=1062.73, samples=6 00:31:12.429 lat (usec) : 250=52.35%, 500=44.34% 00:31:12.429 lat (msec) : 50=3.26% 
00:31:12.429 cpu : usr=0.13%, sys=0.66%, ctx=2000, majf=0, minf=1 00:31:12.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 issued rwts: total=1996,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:12.429 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2562908: Wed Nov 20 09:15:28 2024 00:31:12.429 read: IOPS=90, BW=362KiB/s (371kB/s)(1224KiB/3377msec) 00:31:12.429 slat (usec): min=7, max=12766, avg=53.63, stdev=727.99 00:31:12.429 clat (usec): min=200, max=48136, avg=10906.82, stdev=17982.17 00:31:12.429 lat (usec): min=208, max=53939, avg=10960.54, stdev=18072.94 00:31:12.429 clat percentiles (usec): 00:31:12.429 | 1.00th=[ 208], 5.00th=[ 212], 10.00th=[ 217], 20.00th=[ 219], 00:31:12.429 | 30.00th=[ 223], 40.00th=[ 227], 50.00th=[ 231], 60.00th=[ 235], 00:31:12.429 | 70.00th=[ 247], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:12.429 | 99.00th=[41157], 99.50th=[41681], 99.90th=[47973], 99.95th=[47973], 00:31:12.429 | 99.99th=[47973] 00:31:12.429 bw ( KiB/s): min= 96, max= 1582, per=1.26%, avg=347.67, stdev=604.71, samples=6 00:31:12.429 iops : min= 24, max= 395, avg=86.83, stdev=150.97, samples=6 00:31:12.429 lat (usec) : 250=71.01%, 500=2.61% 00:31:12.429 lat (msec) : 50=26.06% 00:31:12.429 cpu : usr=0.09%, sys=0.09%, ctx=311, majf=0, minf=2 00:31:12.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 complete : 0=0.3%, 4=99.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 issued rwts: total=307,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.429 latency : target=0, window=0, 
percentile=100.00%, depth=1 00:31:12.429 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2562909: Wed Nov 20 09:15:28 2024 00:31:12.429 read: IOPS=3621, BW=14.1MiB/s (14.8MB/s)(42.2MiB/2984msec) 00:31:12.429 slat (nsec): min=4378, max=37246, avg=7126.83, stdev=1271.28 00:31:12.429 clat (usec): min=190, max=40480, avg=265.99, stdev=388.32 00:31:12.429 lat (usec): min=195, max=40488, avg=273.12, stdev=388.34 00:31:12.429 clat percentiles (usec): 00:31:12.429 | 1.00th=[ 206], 5.00th=[ 217], 10.00th=[ 233], 20.00th=[ 243], 00:31:12.429 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 253], 60.00th=[ 258], 00:31:12.429 | 70.00th=[ 269], 80.00th=[ 293], 90.00th=[ 302], 95.00th=[ 310], 00:31:12.429 | 99.00th=[ 371], 99.50th=[ 416], 99.90th=[ 510], 99.95th=[ 523], 00:31:12.429 | 99.99th=[ 717] 00:31:12.429 bw ( KiB/s): min=13336, max=15528, per=52.97%, avg=14587.20, stdev=1118.38, samples=5 00:31:12.429 iops : min= 3334, max= 3882, avg=3646.80, stdev=279.59, samples=5 00:31:12.429 lat (usec) : 250=41.73%, 500=58.07%, 750=0.19% 00:31:12.429 lat (msec) : 50=0.01% 00:31:12.429 cpu : usr=0.80%, sys=3.15%, ctx=10808, majf=0, minf=1 00:31:12.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 issued rwts: total=10808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:12.429 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=2562910: Wed Nov 20 09:15:28 2024 00:31:12.429 read: IOPS=3726, BW=14.6MiB/s (15.3MB/s)(39.6MiB/2721msec) 00:31:12.429 slat (nsec): min=6240, max=34586, avg=7212.04, stdev=967.02 00:31:12.429 clat (usec): min=190, max=520, avg=258.05, stdev=31.32 00:31:12.429 lat (usec): min=197, 
max=528, avg=265.26, stdev=31.39 00:31:12.429 clat percentiles (usec): 00:31:12.429 | 1.00th=[ 223], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 245], 00:31:12.429 | 30.00th=[ 247], 40.00th=[ 249], 50.00th=[ 251], 60.00th=[ 253], 00:31:12.429 | 70.00th=[ 255], 80.00th=[ 260], 90.00th=[ 277], 95.00th=[ 314], 00:31:12.429 | 99.00th=[ 412], 99.50th=[ 457], 99.90th=[ 506], 99.95th=[ 510], 00:31:12.429 | 99.99th=[ 519] 00:31:12.429 bw ( KiB/s): min=14848, max=15512, per=55.42%, avg=15260.80, stdev=334.79, samples=5 00:31:12.429 iops : min= 3712, max= 3878, avg=3815.20, stdev=83.70, samples=5 00:31:12.429 lat (usec) : 250=46.18%, 500=53.65%, 750=0.16% 00:31:12.429 cpu : usr=0.99%, sys=3.27%, ctx=10140, majf=0, minf=2 00:31:12.429 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:12.429 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:12.429 issued rwts: total=10140,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:12.429 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:12.429 00:31:12.429 Run status group 0 (all jobs): 00:31:12.429 READ: bw=26.9MiB/s (28.2MB/s), 362KiB/s-14.6MiB/s (371kB/s-15.3MB/s), io=90.8MiB (95.2MB), run=2721-3377msec 00:31:12.429 00:31:12.429 Disk stats (read/write): 00:31:12.429 nvme0n1: ios=2034/0, merge=0/0, ticks=4145/0, in_queue=4145, util=99.51% 00:31:12.429 nvme0n2: ios=347/0, merge=0/0, ticks=4370/0, in_queue=4370, util=99.20% 00:31:12.429 nvme0n3: ios=10508/0, merge=0/0, ticks=2719/0, in_queue=2719, util=96.55% 00:31:12.429 nvme0n4: ios=9847/0, merge=0/0, ticks=2491/0, in_queue=2491, util=96.44% 00:31:12.429 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:12.429 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:12.685 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:12.685 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:12.942 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:12.942 09:15:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:13.200 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:13.200 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 2562588 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:13.457 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:13.457 09:15:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:13.457 nvmf hotplug test: fio failed as expected 00:31:13.457 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@91 -- # nvmftestfini 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:13.715 rmmod nvme_tcp 00:31:13.715 rmmod nvme_fabrics 00:31:13.715 rmmod nvme_keyring 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 2560085 ']' 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 2560085 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 2560085 ']' 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 2560085 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 
00:31:13.715 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2560085 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2560085' 00:31:13.974 killing process with pid 2560085 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 2560085 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 2560085 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:13.974 09:15:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:16.510 09:15:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:31:16.510 00:31:16.510 real 0m26.008s 00:31:16.510 user 1m31.794s 00:31:16.510 sys 0m11.009s 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:16.510 09:15:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:16.510 ************************************ 00:31:16.510 END TEST nvmf_fio_target 00:31:16.510 ************************************ 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:16.511 09:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:16.511 ************************************ 00:31:16.511 START TEST nvmf_bdevio 00:31:16.511 ************************************ 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:16.511 * Looking for test storage... 00:31:16.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:16.511 09:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.511 --rc genhtml_branch_coverage=1 
00:31:16.511 --rc genhtml_function_coverage=1 00:31:16.511 --rc genhtml_legend=1 00:31:16.511 --rc geninfo_all_blocks=1 00:31:16.511 --rc geninfo_unexecuted_blocks=1 00:31:16.511 00:31:16.511 ' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.511 --rc genhtml_branch_coverage=1 00:31:16.511 --rc genhtml_function_coverage=1 00:31:16.511 --rc genhtml_legend=1 00:31:16.511 --rc geninfo_all_blocks=1 00:31:16.511 --rc geninfo_unexecuted_blocks=1 00:31:16.511 00:31:16.511 ' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.511 --rc genhtml_branch_coverage=1 00:31:16.511 --rc genhtml_function_coverage=1 00:31:16.511 --rc genhtml_legend=1 00:31:16.511 --rc geninfo_all_blocks=1 00:31:16.511 --rc geninfo_unexecuted_blocks=1 00:31:16.511 00:31:16.511 ' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:16.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:16.511 --rc genhtml_branch_coverage=1 00:31:16.511 --rc genhtml_function_coverage=1 00:31:16.511 --rc genhtml_legend=1 00:31:16.511 --rc geninfo_all_blocks=1 00:31:16.511 --rc geninfo_unexecuted_blocks=1 00:31:16.511 00:31:16.511 ' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:16.511 09:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:16.511 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:16.512 
09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:16.512 09:15:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:31:16.512 09:15:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 
00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:23.078 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:23.079 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:23.079 09:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:23.079 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:23.079 Found net devices under 0000:86:00.0: cvl_0_0 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:23.079 09:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:23.079 Found net devices under 0000:86:00.1: cvl_0_1 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:23.079 09:15:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 
00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:23.079 09:15:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 
00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:23.079 10.0.0.1 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:23.079 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:23.080 10.0.0.2 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:23.080 09:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:23.080 09:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:23.080 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:23.080 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.457 ms 00:31:23.080 00:31:23.080 --- 10.0.0.1 ping statistics --- 00:31:23.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.080 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:23.080 
09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:23.080 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:23.080 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:31:23.080 00:31:23.080 --- 10.0.0.2 ping statistics --- 00:31:23.080 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:23.080 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # 
get_net_dev initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:23.080 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n 
'' ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local 
dev=target0 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:23.081 09:15:38 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=2567161 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 2567161 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 2567161 ']' 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:23.081 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:23.081 09:15:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.081 [2024-11-20 09:15:38.362834] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:23.081 [2024-11-20 09:15:38.363939] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:31:23.081 [2024-11-20 09:15:38.363988] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:23.081 [2024-11-20 09:15:38.441431] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:23.081 [2024-11-20 09:15:38.483887] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:23.081 [2024-11-20 09:15:38.483920] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:23.081 [2024-11-20 09:15:38.483928] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:23.081 [2024-11-20 09:15:38.483934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:23.081 [2024-11-20 09:15:38.483939] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:23.081 [2024-11-20 09:15:38.485385] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:31:23.081 [2024-11-20 09:15:38.485496] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:31:23.081 [2024-11-20 09:15:38.485625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:23.081 [2024-11-20 09:15:38.485626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:31:23.081 [2024-11-20 09:15:38.553961] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:23.081 [2024-11-20 09:15:38.554785] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:23.081 [2024-11-20 09:15:38.554892] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:23.081 [2024-11-20 09:15:38.555329] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:23.081 [2024-11-20 09:15:38.555374] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 [2024-11-20 09:15:39.242369] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 Malloc0 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:23.339 [2024-11-20 09:15:39.326545] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:23.339 { 00:31:23.339 "params": { 00:31:23.339 "name": "Nvme$subsystem", 00:31:23.339 "trtype": "$TEST_TRANSPORT", 00:31:23.339 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:23.339 "adrfam": "ipv4", 00:31:23.339 "trsvcid": "$NVMF_PORT", 00:31:23.339 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:23.339 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:23.339 "hdgst": ${hdgst:-false}, 00:31:23.339 "ddgst": ${ddgst:-false} 00:31:23.339 }, 00:31:23.339 "method": "bdev_nvme_attach_controller" 00:31:23.339 } 00:31:23.339 EOF 00:31:23.339 )") 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:31:23.339 09:15:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:23.339 "params": { 00:31:23.339 "name": "Nvme1", 00:31:23.339 "trtype": "tcp", 00:31:23.339 "traddr": "10.0.0.2", 00:31:23.339 "adrfam": "ipv4", 00:31:23.339 "trsvcid": "4420", 00:31:23.339 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:23.339 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:23.339 "hdgst": false, 00:31:23.339 "ddgst": false 00:31:23.339 }, 00:31:23.339 "method": "bdev_nvme_attach_controller" 00:31:23.339 }' 00:31:23.339 [2024-11-20 09:15:39.377058] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:31:23.339 [2024-11-20 09:15:39.377108] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2567412 ] 00:31:23.595 [2024-11-20 09:15:39.454983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:31:23.595 [2024-11-20 09:15:39.499074] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:23.595 [2024-11-20 09:15:39.499181] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:23.595 [2024-11-20 09:15:39.499182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:23.852 I/O targets: 00:31:23.852 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:31:23.852 00:31:23.852 00:31:23.852 CUnit - A unit testing framework for C - Version 2.1-3 00:31:23.852 http://cunit.sourceforge.net/ 00:31:23.852 00:31:23.852 00:31:23.852 Suite: bdevio tests on: Nvme1n1 00:31:24.109 Test: blockdev write read block ...passed 00:31:24.109 Test: blockdev write zeroes read block ...passed 00:31:24.109 Test: blockdev write zeroes read no split ...passed 00:31:24.109 Test: blockdev 
write zeroes read split ...passed 00:31:24.109 Test: blockdev write zeroes read split partial ...passed 00:31:24.109 Test: blockdev reset ...[2024-11-20 09:15:40.001466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:31:24.109 [2024-11-20 09:15:40.001532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x157d340 (9): Bad file descriptor 00:31:24.110 [2024-11-20 09:15:40.004920] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:31:24.110 passed 00:31:24.110 Test: blockdev write read 8 blocks ...passed 00:31:24.110 Test: blockdev write read size > 128k ...passed 00:31:24.110 Test: blockdev write read invalid size ...passed 00:31:24.110 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:31:24.110 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:31:24.110 Test: blockdev write read max offset ...passed 00:31:24.366 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:31:24.366 Test: blockdev writev readv 8 blocks ...passed 00:31:24.366 Test: blockdev writev readv 30 x 1block ...passed 00:31:24.367 Test: blockdev writev readv block ...passed 00:31:24.367 Test: blockdev writev readv size > 128k ...passed 00:31:24.367 Test: blockdev writev readv size > 128k in two iovs ...passed 00:31:24.367 Test: blockdev comparev and writev ...[2024-11-20 09:15:40.216017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.216046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.216061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 
[2024-11-20 09:15:40.216069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.216365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.216376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.216388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.216395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.216671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.216686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.216698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.216705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.217003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.217014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.217026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:31:24.367 [2024-11-20 09:15:40.217033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:31:24.367 passed 00:31:24.367 Test: blockdev nvme passthru rw ...passed 00:31:24.367 Test: blockdev nvme passthru vendor specific ...[2024-11-20 09:15:40.301313] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:24.367 [2024-11-20 09:15:40.301329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.301440] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:24.367 [2024-11-20 09:15:40.301449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.301557] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:24.367 [2024-11-20 09:15:40.301567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:31:24.367 [2024-11-20 09:15:40.301675] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:31:24.367 [2024-11-20 09:15:40.301684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:31:24.367 passed 00:31:24.367 Test: blockdev nvme admin passthru ...passed 00:31:24.367 Test: blockdev copy ...passed 00:31:24.367 00:31:24.367 Run Summary: Type Total Ran Passed Failed Inactive 00:31:24.367 suites 1 1 n/a 0 0 00:31:24.367 tests 23 23 23 0 0 00:31:24.367 asserts 152 152 152 0 n/a 00:31:24.367 00:31:24.367 Elapsed time = 1.014 
seconds 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:24.625 rmmod nvme_tcp 00:31:24.625 rmmod nvme_fabrics 00:31:24.625 rmmod nvme_keyring 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 2567161 ']' 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 2567161 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 2567161 ']' 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 2567161 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2567161 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2567161' 00:31:24.625 killing process with pid 2567161 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 2567161 00:31:24.625 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 2567161 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:24.884 09:15:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:27.420 09:15:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:31:27.420 00:31:27.420 real 0m10.815s 00:31:27.420 user 0m9.637s 00:31:27.420 sys 0m5.401s 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:31:27.420 ************************************ 00:31:27.420 END TEST nvmf_bdevio 00:31:27.420 ************************************ 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.420 ************************************ 00:31:27.420 START TEST nvmf_zcopy 00:31:27.420 ************************************ 00:31:27.420 09:15:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:31:27.420 * Looking for test storage... 
00:31:27.420 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:31:27.420 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:31:27.421 09:15:43 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.421 --rc genhtml_branch_coverage=1 00:31:27.421 --rc genhtml_function_coverage=1 00:31:27.421 --rc genhtml_legend=1 00:31:27.421 --rc geninfo_all_blocks=1 00:31:27.421 --rc geninfo_unexecuted_blocks=1 00:31:27.421 00:31:27.421 ' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.421 --rc genhtml_branch_coverage=1 00:31:27.421 --rc genhtml_function_coverage=1 00:31:27.421 --rc genhtml_legend=1 00:31:27.421 --rc geninfo_all_blocks=1 00:31:27.421 --rc geninfo_unexecuted_blocks=1 00:31:27.421 00:31:27.421 ' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.421 --rc genhtml_branch_coverage=1 00:31:27.421 --rc genhtml_function_coverage=1 00:31:27.421 --rc genhtml_legend=1 00:31:27.421 --rc geninfo_all_blocks=1 00:31:27.421 --rc geninfo_unexecuted_blocks=1 00:31:27.421 00:31:27.421 ' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:27.421 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.421 --rc genhtml_branch_coverage=1 00:31:27.421 --rc genhtml_function_coverage=1 00:31:27.421 --rc genhtml_legend=1 00:31:27.421 --rc geninfo_all_blocks=1 00:31:27.421 --rc geninfo_unexecuted_blocks=1 00:31:27.421 00:31:27.421 ' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:27.421 09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:31:27.421 
09:15:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.987 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:33.988 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:33.988 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:33.988 Found net devices under 0000:86:00.0: cvl_0_0 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:33.988 Found net devices under 0000:86:00.1: cvl_0_1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:33.988 09:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:31:33.988 09:15:48 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:33.988 10.0.0.1 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.988 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:31:33.989 10.0.0.2 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:33.989 09:15:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:33.989 09:15:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:33.989 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:33.989 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.472 ms 00:31:33.989 00:31:33.989 --- 10.0.0.1 ping statistics --- 00:31:33.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.989 rtt min/avg/max/mdev = 0.472/0.472/0.472/0.000 ms 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:33.989 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:33.989 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.128 ms 00:31:33.989 00:31:33.989 --- 10.0.0.2 ping statistics --- 00:31:33.989 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.989 rtt min/avg/max/mdev = 0.128/0.128/0.128/0.000 ms 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
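The address assignment above relies on a `val_to_ip` helper that unpacks a 32-bit integer from the `ip_pool` (e.g. `167772161`) into dotted-quad form with `printf`. A standalone sketch of that conversion, reconstructed from the `printf '%u.%u.%u.%u\n' 10 0 0 1` call visible in the trace (the shift/mask body is an inference, not copied from `nvmf/setup.sh`):

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper seen in the nvmf/setup.sh trace above:
# unpack a 32-bit integer into four octets and print dotted-quad form.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

This is why the pool increments by 2 per interface pair: consecutive integers become the initiator/target addresses 10.0.0.1 and 10.0.0.2.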
00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:33.989 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.990 09:15:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:31:33.990 09:15:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.990 09:15:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=2571021 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 2571021 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 2571021 ']' 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 [2024-11-20 09:15:49.263234] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:33.990 [2024-11-20 09:15:49.264225] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:31:33.990 [2024-11-20 09:15:49.264266] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.990 [2024-11-20 09:15:49.344335] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.990 [2024-11-20 09:15:49.385762] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.990 [2024-11-20 09:15:49.385796] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.990 [2024-11-20 09:15:49.385803] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.990 [2024-11-20 09:15:49.385809] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.990 [2024-11-20 09:15:49.385814] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:33.990 [2024-11-20 09:15:49.386363] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.990 [2024-11-20 09:15:49.453123] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:33.990 [2024-11-20 09:15:49.453336] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
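The interface wiring the trace performed before target startup (create a namespace, move the target NIC into it, assign the paired IPs, bring both links up) can be collected into one helper. This is a sketch, not the SPDK script itself: the `run`/`DRY_RUN` wrapper is hypothetical, added so the privileged `ip` calls can be previewed without root; device and namespace names match the log.

```shell
#!/usr/bin/env bash
# Hedged sketch of the initiator/target pair setup shown in the trace.
# With DRY_RUN=1 the commands are echoed instead of executed, since the
# real ip(8) calls require root.
run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }

setup_pair() {
  local ns=$1 initiator=$2 target=$3 iip=$4 tip=$5
  run ip netns add "$ns"                             # create_target_ns
  run ip netns exec "$ns" ip link set lo up          # set_up lo in ns
  run ip link set "$target" netns "$ns"              # add_to_ns
  run ip addr add "$iip/24" dev "$initiator"         # set_ip (initiator)
  run ip netns exec "$ns" ip addr add "$tip/24" dev "$target"  # set_ip (target)
  run ip link set "$initiator" up                    # set_up initiator
  run ip netns exec "$ns" ip link set "$target" up   # set_up target in ns
}

DRY_RUN=1 setup_pair nvmf_ns_spdk cvl_0_0 cvl_0_1 10.0.0.1 10.0.0.2
```

The real scripts additionally record each address in `/sys/class/net/<dev>/ifalias` so later `get_ip_address` calls can read it back, and punch an iptables ACCEPT rule for port 4420.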
00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 [2024-11-20 09:15:49.519057] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
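The `rpc_cmd` calls in this stretch configure the target in a fixed order: zero-copy-enabled TCP transport, subsystem, data and discovery listeners, then a malloc bdev attached as a namespace. Gathered into one testable list (the `scripts/rpc.py` path is the conventional SPDK location and an assumption here, since the trace routes these through its own `rpc_cmd` wrapper):

```shell
#!/usr/bin/env bash
# The target-side RPC sequence from the trace, emitted as command strings.
# Pipe the output to bash against a live nvmf_tgt, or inspect it as-is.
zcopy_target_rpcs() {
  local ip=$1 port=$2 rpc=${3:-scripts/rpc.py}   # rpc path is an assumed default
  printf '%s\n' \
    "$rpc nvmf_create_transport -t tcp -o -c 0 --zcopy" \
    "$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10" \
    "$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a $ip -s $port" \
    "$rpc nvmf_subsystem_add_listener discovery -t tcp -a $ip -s $port" \
    "$rpc bdev_malloc_create 32 4096 -b malloc0" \
    "$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1"
}

zcopy_target_rpcs 10.0.0.2 4420
```

The `--zcopy` flag on `nvmf_create_transport` is the point of this test: it enables the zero-copy receive path being exercised by bdevperf below.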
00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:33.990 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.991 [2024-11-20 09:15:49.547294] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.991 malloc0 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:31:33.991 09:15:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:33.991 { 00:31:33.991 "params": { 00:31:33.991 "name": "Nvme$subsystem", 00:31:33.991 "trtype": "$TEST_TRANSPORT", 00:31:33.991 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:33.991 "adrfam": "ipv4", 00:31:33.991 "trsvcid": "$NVMF_PORT", 00:31:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:33.991 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:33.991 "hdgst": ${hdgst:-false}, 00:31:33.991 "ddgst": ${ddgst:-false} 00:31:33.991 }, 00:31:33.991 "method": "bdev_nvme_attach_controller" 00:31:33.991 } 00:31:33.991 EOF 00:31:33.991 )") 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:31:33.991 09:15:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:33.991 "params": { 00:31:33.991 "name": "Nvme1", 00:31:33.991 "trtype": "tcp", 00:31:33.991 "traddr": "10.0.0.2", 00:31:33.991 "adrfam": "ipv4", 00:31:33.991 "trsvcid": "4420", 00:31:33.991 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:33.991 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:33.991 "hdgst": false, 00:31:33.991 "ddgst": false 00:31:33.991 }, 00:31:33.991 "method": "bdev_nvme_attach_controller" 00:31:33.991 }' 00:31:33.991 [2024-11-20 09:15:49.643388] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:31:33.991 [2024-11-20 09:15:49.643445] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2571205 ] 00:31:33.991 [2024-11-20 09:15:49.720896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:33.991 [2024-11-20 09:15:49.762367] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.991 Running I/O for 10 seconds... 
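The xtrace records above show `gen_nvmf_target_json` building the bdevperf attach config from a heredoc and printing the substituted result. A minimal standalone sketch of that substitution step, with the variable values this run used hard-coded (the real helper in `nvmf/common.sh` takes the subsystem list as arguments and pipes the result through `jq`; this sketch omits that and is only an illustration, not the helper itself):

```shell
#!/bin/sh
# Sketch of the gen_nvmf_target_json substitution seen in the xtrace above.
# Values mirror this run's environment; in the real helper they come from
# TEST_TRANSPORT / NVMF_FIRST_TARGET_IP / NVMF_PORT.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

# Expand the heredoc template into the bdev_nvme_attach_controller entry
# that is fed to bdevperf via --json /dev/fd/62.
config=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$config"
```

The output matches the substituted config the log prints just before bdevperf starts (Nvme1 attaching to 10.0.0.2:4420, subsystem cnode1, digests disabled).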
00:31:36.304 8229.00 IOPS, 64.29 MiB/s [2024-11-20T08:15:53.280Z] 8325.00 IOPS, 65.04 MiB/s [2024-11-20T08:15:54.219Z] 8357.33 IOPS, 65.29 MiB/s [2024-11-20T08:15:55.156Z] 8365.00 IOPS, 65.35 MiB/s [2024-11-20T08:15:56.093Z] 8370.80 IOPS, 65.40 MiB/s [2024-11-20T08:15:57.029Z] 8378.67 IOPS, 65.46 MiB/s [2024-11-20T08:15:57.964Z] 8387.43 IOPS, 65.53 MiB/s [2024-11-20T08:15:59.350Z] 8387.75 IOPS, 65.53 MiB/s [2024-11-20T08:16:00.290Z] 8386.89 IOPS, 65.52 MiB/s [2024-11-20T08:16:00.290Z] 8390.00 IOPS, 65.55 MiB/s 00:31:44.249 Latency(us) 00:31:44.249 [2024-11-20T08:16:00.290Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:44.249 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:31:44.249 Verification LBA range: start 0x0 length 0x1000 00:31:44.249 Nvme1n1 : 10.01 8394.24 65.58 0.00 0.00 15205.78 1203.87 21541.40 00:31:44.249 [2024-11-20T08:16:00.290Z] =================================================================================================================== 00:31:44.249 [2024-11-20T08:16:00.290Z] Total : 8394.24 65.58 0.00 0.00 15205.78 1203.87 21541.40 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=2572813 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:31:44.249 09:16:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:31:44.249 { 00:31:44.249 "params": { 00:31:44.249 "name": "Nvme$subsystem", 00:31:44.249 "trtype": "$TEST_TRANSPORT", 00:31:44.249 "traddr": "$NVMF_FIRST_TARGET_IP", 00:31:44.249 "adrfam": "ipv4", 00:31:44.249 "trsvcid": "$NVMF_PORT", 00:31:44.249 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:31:44.249 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:31:44.249 "hdgst": ${hdgst:-false}, 00:31:44.249 "ddgst": ${ddgst:-false} 00:31:44.249 }, 00:31:44.249 "method": "bdev_nvme_attach_controller" 00:31:44.249 } 00:31:44.249 EOF 00:31:44.249 )") 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:31:44.249 [2024-11-20 09:16:00.118706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.118736] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:31:44.249 09:16:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:31:44.249 "params": { 00:31:44.249 "name": "Nvme1", 00:31:44.249 "trtype": "tcp", 00:31:44.249 "traddr": "10.0.0.2", 00:31:44.249 "adrfam": "ipv4", 00:31:44.249 "trsvcid": "4420", 00:31:44.249 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:31:44.249 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:31:44.249 "hdgst": false, 00:31:44.249 "ddgst": false 00:31:44.249 }, 00:31:44.249 "method": "bdev_nvme_attach_controller" 00:31:44.249 }' 00:31:44.249 [2024-11-20 09:16:00.130669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.130684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.142662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.142673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.154664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.154673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.157301] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:31:44.249 [2024-11-20 09:16:00.157351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2572813 ] 00:31:44.249 [2024-11-20 09:16:00.166665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.166677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.178661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.178672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.190663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.190673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.202662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.202672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.214662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.214672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.226663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.226672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.232851] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:44.249 [2024-11-20 09:16:00.238665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:31:44.249 [2024-11-20 09:16:00.238683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.250662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.250675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.262663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.262672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.273752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:44.249 [2024-11-20 09:16:00.274665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.274677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.249 [2024-11-20 09:16:00.286679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.249 [2024-11-20 09:16:00.286699] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.298669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.298688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.310666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.310679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.322663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.322676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.334666] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.334679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.346661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.346671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.358671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.358690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.370669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.370684] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.382671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.382686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.394665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.394677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.406666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.406676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.418665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.418678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.430670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.430686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.442668] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.442682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.454664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.454674] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.466663] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.466672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.478665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.478679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.490661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.490671] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.502665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.502676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.514664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.514675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.526667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 
[2024-11-20 09:16:00.526682] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.509 [2024-11-20 09:16:00.538662] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.509 [2024-11-20 09:16:00.538673] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 [2024-11-20 09:16:00.550664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.768 [2024-11-20 09:16:00.550677] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 [2024-11-20 09:16:00.562664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.768 [2024-11-20 09:16:00.562676] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 [2024-11-20 09:16:00.574673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.768 [2024-11-20 09:16:00.574692] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 [2024-11-20 09:16:00.586667] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.768 [2024-11-20 09:16:00.586683] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 Running I/O for 5 seconds... 
00:31:44.768 [2024-11-20 09:16:00.601360] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.768 [2024-11-20 09:16:00.601382] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.768 [2024-11-20 09:16:00.616542] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.616562] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.631775] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.631795] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.647080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.647100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.658774] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.658793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.672587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.672608] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.687872] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.687893] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.702675] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.702703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.715624] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.715644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.728503] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.728523] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.743797] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.743816] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.758669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.758688] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.772791] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.772812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.788271] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.788291] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:44.769 [2024-11-20 09:16:00.803109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:44.769 [2024-11-20 09:16:00.803129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.818618] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.818640] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.832587] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.832607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.847569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.847588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.858189] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.858209] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.872784] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.872804] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.888220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.888240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.902773] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.902793] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.915002] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.915021] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.928821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.928842] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.944124] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 
[2024-11-20 09:16:00.944144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.027 [2024-11-20 09:16:00.959122] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.027 [2024-11-20 09:16:00.959141] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:00.970995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:00.971017] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:00.983974] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:00.983993] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:00.999404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:00.999424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:01.014356] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:01.014377] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:01.027723] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:01.027743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:01.038508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:01.038528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.028 [2024-11-20 09:16:01.052988] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.028 [2024-11-20 09:16:01.053008] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.068769] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.068790] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.083959] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.083980] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.099483] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.099503] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.114817] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.114838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.128262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.128283] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.138793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.138813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.152490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.152510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.167498] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.167518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:45.287 [2024-11-20 09:16:01.182929] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.182955] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.196027] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.196046] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.206576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.206595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.220914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.220934] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.236085] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.236111] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.251229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.251249] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.266372] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.266391] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.279866] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:45.287 [2024-11-20 09:16:01.279886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:45.287 [2024-11-20 09:16:01.290840] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:45.287 [2024-11-20 09:16:01.290860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:45.804 16278.00 IOPS, 127.17 MiB/s [2024-11-20T08:16:01.845Z]
00:31:46.580 16286.00 IOPS, 127.23 MiB/s [2024-11-20T08:16:02.621Z]
00:31:47.616 16262.67 IOPS, 127.05 MiB/s [2024-11-20T08:16:03.657Z]
00:31:47.616 [2024-11-20 09:16:03.616057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext:
[2024-11-20 09:16:03.616078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.616 [2024-11-20 09:16:03.631316] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.616 [2024-11-20 09:16:03.631335] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.616 [2024-11-20 09:16:03.646159] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.616 [2024-11-20 09:16:03.646179] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.659826] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.659847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.671277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.671296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.683945] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.683970] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.694536] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.694555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.708429] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.708448] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.723286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.723305] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.735612] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.735631] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.750569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.750588] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.763277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.763295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.776519] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.776538] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.791438] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.791456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.806386] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.806405] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.818849] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.818868] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.833057] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.833077] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:47.875 [2024-11-20 09:16:03.848019] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.848039] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.863081] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.863099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.878576] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.878595] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.892178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.892196] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:47.875 [2024-11-20 09:16:03.907850] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:47.875 [2024-11-20 09:16:03.907874] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.923054] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.923073] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.938966] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.938985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.954995] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.955015] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.970578] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.970598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.984507] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.984526] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:03.999793] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:03.999812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.014778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.014797] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.025182] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.025200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.040101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.040130] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.054960] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.054977] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.070520] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.070539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.084375] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.084393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.099794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.099812] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.114834] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.114853] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.127783] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.127801] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.139433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.139451] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.154505] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.154525] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.133 [2024-11-20 09:16:04.165756] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.133 [2024-11-20 09:16:04.165776] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.180525] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.180550] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.195450] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 
[2024-11-20 09:16:04.195469] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.210805] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.210824] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.223225] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.223244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.236111] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.236129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.251456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.251474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.266771] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.266789] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.278219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.278238] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.292912] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.292931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.308291] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.308310] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.323575] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.323593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.338593] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.338613] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.352020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.352038] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.366821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.366839] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.380679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.380697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.396335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.396353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.411101] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.411118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.392 [2024-11-20 09:16:04.426277] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.392 [2024-11-20 09:16:04.426296] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:48.651 [2024-11-20 09:16:04.441073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.441092] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.456236] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.456259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.471283] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.471302] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.486632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.486650] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.500444] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.500462] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.516125] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.516144] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.531671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.531690] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.546415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.546434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.560091] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.560110] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.575192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.575210] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.590328] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.590348] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 16277.75 IOPS, 127.17 MiB/s [2024-11-20T08:16:04.692Z] [2024-11-20 09:16:04.604066] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.604085] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.619190] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.619218] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.634770] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.634788] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.647725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.647743] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.662620] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.662639] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.673865] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.673883] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.651 [2024-11-20 09:16:04.688757] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.651 [2024-11-20 09:16:04.688775] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.703905] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.703924] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.719079] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.719097] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.734549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.734568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.747893] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.747911] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.759261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.759280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.772392] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.772411] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.787733] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.787751] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.802547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.802565] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.816017] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.816035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.826679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.826697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.840844] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.840862] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.856275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.856294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.871527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.871546] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.886373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.886392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.900134] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 
[2024-11-20 09:16:04.900153] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.915456] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.915474] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.930494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.930512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:48.909 [2024-11-20 09:16:04.943491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:48.909 [2024-11-20 09:16:04.943510] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:04.959099] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:04.959118] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:04.970138] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:04.970157] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:04.984545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:04.984564] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.000115] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.000133] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.015147] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.015167] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.031132] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.031150] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.047208] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.047227] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.062787] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.062808] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.074352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.074386] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.088997] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.089016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.104248] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.104267] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.119185] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.119206] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.134275] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.134295] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:31:49.168 [2024-11-20 09:16:05.148067] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.148086] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.159247] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.159266] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.172205] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.172223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.187387] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.187406] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.168 [2024-11-20 09:16:05.203241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.168 [2024-11-20 09:16:05.203260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.426 [2024-11-20 09:16:05.218264] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.426 [2024-11-20 09:16:05.218284] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.426 [2024-11-20 09:16:05.230272] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.426 [2024-11-20 09:16:05.230290] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.426 [2024-11-20 09:16:05.244863] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.244882] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.260357] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.260375] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.275465] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.275483] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.290571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.290590] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.303762] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.303780] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.319145] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.319165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.334308] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.334328] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.348651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.348670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.364178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.364198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.379313] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.379332] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.394807] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.394827] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.408595] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.408615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.423743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.423762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.438584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.438605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.427 [2024-11-20 09:16:05.451794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.427 [2024-11-20 09:16:05.451814] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.466897] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.466916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.477898] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.477916] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.492609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 
[2024-11-20 09:16:05.492627] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.507522] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.507541] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.522496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.522514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.534841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.534860] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.548486] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.548504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.563586] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.563605] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.573778] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.573796] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 [2024-11-20 09:16:05.588841] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 [2024-11-20 09:16:05.588861] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.688 16281.80 IOPS, 127.20 MiB/s [2024-11-20T08:16:05.729Z] [2024-11-20 09:16:05.603794] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.688 
[2024-11-20 09:16:05.603813] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.688
00:31:49.688 Latency(us)
00:31:49.688 [2024-11-20T08:16:05.729Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:31:49.688 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:31:49.688 Nvme1n1 : 5.01 16285.13 127.23 0.00 0.00 7852.42 2279.51 13278.16
00:31:49.688 [2024-11-20T08:16:05.729Z] ===================================================================================================================
00:31:49.688 [2024-11-20T08:16:05.729Z] Total : 16285.13 127.23 0.00 0.00 7852.42 2279.51 13278.16
00:31:49.688 [2024-11-20 09:16:05.614674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:49.689 [2024-11-20 09:16:05.614691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.689 [2024-11-20 09:16:05.626669] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:49.689 [2024-11-20 09:16:05.626685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.689 [2024-11-20 09:16:05.638679] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:49.689 [2024-11-20 09:16:05.638697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.689 [2024-11-20 09:16:05.650670] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:49.689 [2024-11-20 09:16:05.650686] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.689 [2024-11-20 09:16:05.662671] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:31:49.689 [2024-11-20 09:16:05.662685] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:31:49.689 [2024-11-20 09:16:05.674664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*:
Requested NSID 1 already in use 00:31:49.689 [2024-11-20 09:16:05.674679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.689 [2024-11-20 09:16:05.686665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.689 [2024-11-20 09:16:05.686680] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.689 [2024-11-20 09:16:05.698665] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.689 [2024-11-20 09:16:05.698679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.689 [2024-11-20 09:16:05.710664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.689 [2024-11-20 09:16:05.710678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:49.689 [2024-11-20 09:16:05.722666] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:49.689 [2024-11-20 09:16:05.722678] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:50.022 [2024-11-20 09:16:05.734677] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:50.022 [2024-11-20 09:16:05.734706] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:50.022 [2024-11-20 09:16:05.746673] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:50.022 [2024-11-20 09:16:05.746691] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:50.022 [2024-11-20 09:16:05.754664] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:50.022 [2024-11-20 09:16:05.754675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:50.022 [2024-11-20 09:16:05.766661] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:31:50.022 
[2024-11-20 09:16:05.766670] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:50.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (2572813) - No such process 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 2572813 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:50.022 delay0 00:31:50.022 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:50.023 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:31:50.023 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:50.023 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:50.023 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:31:50.023 09:16:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
00:31:50.023 [2024-11-20 09:16:05.913640] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:31:58.244 [2024-11-20 09:16:12.763419] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bfc80 is same with the state(6) to be set
00:31:58.244 [2024-11-20 09:16:12.763455] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x12bfc80 is same with the state(6) to be set
00:31:58.244 Initializing NVMe Controllers
00:31:58.244 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:31:58.244 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:31:58.244 Initialization complete. Launching workers.
00:31:58.244 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 4895 00:31:58.244 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 5170, failed to submit 45 00:31:58.244 success 5027, unsuccessful 143, failed 0 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:58.244 rmmod nvme_tcp 00:31:58.244 rmmod nvme_fabrics 00:31:58.244 rmmod nvme_keyring 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 2571021 ']' 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 2571021 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 
-- # '[' -z 2571021 ']' 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 2571021 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2571021 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2571021' 00:31:58.244 killing process with pid 2571021 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 2571021 00:31:58.244 09:16:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 2571021 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:31:58.244 09:16:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 
-- # flush_ip cvl_0_1 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:31:59.181 00:31:59.181 real 0m32.177s 00:31:59.181 user 0m41.854s 00:31:59.181 sys 0m12.516s 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:31:59.181 ************************************ 00:31:59.181 END TEST nvmf_zcopy 00:31:59.181 ************************************ 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:31:59.181 00:31:59.181 real 4m25.788s 00:31:59.181 user 9m5.579s 00:31:59.181 
sys 1m47.040s 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:59.181 09:16:15 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:59.181 ************************************ 00:31:59.181 END TEST nvmf_target_core_interrupt_mode 00:31:59.181 ************************************ 00:31:59.181 09:16:15 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:59.181 09:16:15 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:59.181 09:16:15 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:59.181 09:16:15 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:31:59.441 ************************************ 00:31:59.441 START TEST nvmf_interrupt 00:31:59.441 ************************************ 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:31:59.441 * Looking for test storage... 
00:31:59.441 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:59.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.441 --rc genhtml_branch_coverage=1 00:31:59.441 --rc genhtml_function_coverage=1 00:31:59.441 --rc genhtml_legend=1 00:31:59.441 --rc geninfo_all_blocks=1 00:31:59.441 --rc geninfo_unexecuted_blocks=1 00:31:59.441 00:31:59.441 ' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:59.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.441 --rc genhtml_branch_coverage=1 00:31:59.441 --rc 
genhtml_function_coverage=1 00:31:59.441 --rc genhtml_legend=1 00:31:59.441 --rc geninfo_all_blocks=1 00:31:59.441 --rc geninfo_unexecuted_blocks=1 00:31:59.441 00:31:59.441 ' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:59.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.441 --rc genhtml_branch_coverage=1 00:31:59.441 --rc genhtml_function_coverage=1 00:31:59.441 --rc genhtml_legend=1 00:31:59.441 --rc geninfo_all_blocks=1 00:31:59.441 --rc geninfo_unexecuted_blocks=1 00:31:59.441 00:31:59.441 ' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:59.441 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:59.441 --rc genhtml_branch_coverage=1 00:31:59.441 --rc genhtml_function_coverage=1 00:31:59.441 --rc genhtml_legend=1 00:31:59.441 --rc geninfo_all_blocks=1 00:31:59.441 --rc geninfo_unexecuted_blocks=1 00:31:59.441 00:31:59.441 ' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:59.441 
09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:59.441 
09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:59.441 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:31:59.442 09:16:15 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:32:06.011 
09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:06.011 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:06.011 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:06.011 Found net devices under 0000:86:00.0: cvl_0_0 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:06.011 Found net devices under 0000:86:00.1: cvl_0_1 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # create_target_ns 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:06.011 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:06.012 10.0.0.1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:06.012 
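The `val_to_ip` calls above turn the integers 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A minimal sketch of that conversion (the shifting arithmetic is an assumption; the log only shows the helper name and the resulting octets):

```shell
# Sketch of nvmf/setup.sh's val_to_ip: split a 32-bit integer into four
# octets and print them dotted-quad. 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # initiator address seen in the log
val_to_ip 167772162   # target address seen in the log
```

This also explains the `ips=("$ip" $((++ip)))` line earlier in the log: each initiator/target pair consumes two consecutive addresses from the `0x0a000001` pool.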
09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:06.012 10.0.0.2 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:06.012 09:16:21 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n 
initiator0 ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:06.012 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:06.012 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.288 ms 00:32:06.012 00:32:06.012 --- 10.0.0.1 ping statistics --- 00:32:06.012 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.012 rtt min/avg/max/mdev = 0.288/0.288/0.288/0.000 ms 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:06.012 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 
00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:06.013 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:06.013 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.124 ms 00:32:06.013 00:32:06.013 --- 10.0.0.2 ping statistics --- 00:32:06.013 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:06.013 rtt min/avg/max/mdev = 0.124/0.124/0.124/0.000 ms 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # 
[[ -n '' ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:06.013 09:16:21 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=2578425 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 2578425 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 2578425 ']' 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:06.013 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.013 [2024-11-20 09:16:21.534989] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:06.013 [2024-11-20 09:16:21.535991] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:32:06.013 [2024-11-20 09:16:21.536033] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:06.013 [2024-11-20 09:16:21.617194] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:06.013 [2024-11-20 09:16:21.659494] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:06.013 [2024-11-20 09:16:21.659531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:06.013 [2024-11-20 09:16:21.659538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:06.013 [2024-11-20 09:16:21.659544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:06.013 [2024-11-20 09:16:21.659549] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:06.013 [2024-11-20 09:16:21.660716] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:06.013 [2024-11-20 09:16:21.660719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:06.013 [2024-11-20 09:16:21.728395] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:06.013 [2024-11-20 09:16:21.728941] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:06.013 [2024-11-20 09:16:21.729199] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:06.013 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:06.014 5000+0 records in 00:32:06.014 5000+0 records out 00:32:06.014 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0176367 s, 581 MB/s 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 AIO0 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.014 09:16:21 
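The `dd` invocation above sizes the AIO backing file from its block size and count; the reported 10240000 bytes is simply `bs * count`:

```shell
# Size of the aiofile created by setup_bdev_aio: bs=2048 bytes * count=5000
# blocks, matching the "10240000 bytes (10 MB, 9.8 MiB)" line in the log.
echo $(( 2048 * 5000 ))
```

(10240000 bytes is 10.24 MB in decimal units and roughly 9.77 MiB in binary units, hence the two figures `dd` prints.)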
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 [2024-11-20 09:16:21.853526] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:06.014 [2024-11-20 09:16:21.893818] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2578425 0 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 0 idle 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:06.014 09:16:21 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578425 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578425 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.26 reactor_0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.272 
09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 2578425 1 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 1 idle 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578429 root 20 0 128.2g 46080 33792 S 0.0 0.0 0:00.00 reactor_1' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578429 root 20 0 128.2g 
46080 33792 S 0.0 0.0 0:00.00 reactor_1 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:06.272 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=2578471 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2578425 0 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2578425 0 busy 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:06.273 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578425 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.36 reactor_0' 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578425 root 20 0 128.2g 46848 33792 R 66.7 0.0 0:00.36 reactor_0 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=66.7 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=66 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:06.530 09:16:22 
nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 2578425 1 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 2578425 1 busy 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:06.530 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578429 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1' 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578429 root 20 0 128.2g 46848 33792 R 99.9 0.0 0:00.23 reactor_1 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # 
cpu_rate=99 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:06.788 09:16:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 2578471
00:32:16.747 Initializing NVMe Controllers
00:32:16.747 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:32:16.747 Controller IO queue size 256, less than required.
00:32:16.747 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:32:16.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:32:16.747 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:32:16.747 Initialization complete. Launching workers.
00:32:16.747 ========================================================
00:32:16.747 Latency(us)
00:32:16.747 Device Information : IOPS MiB/s Average min max
00:32:16.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 15924.70 62.21 16082.90 3385.92 30823.34
00:32:16.747 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16086.80 62.84 15918.22 7498.51 27352.79
00:32:16.747 ========================================================
00:32:16.747 Total : 32011.50 125.04 16000.14 3385.92 30823.34
00:32:16.747
00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2578425 0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 0 idle 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # 
grep reactor_0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578425 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0' 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578425 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:20.25 reactor_0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 2578425 1 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 1 idle 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:16.747 09:16:32 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:16.747 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578429 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1' 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578429 root 20 0 128.2g 46848 33792 S 0.0 0.0 0:10.00 reactor_1 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:17.007 09:16:32 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:17.266 09:16:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 
00:32:17.266 09:16:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:17.266 09:16:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:17.266 09:16:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:17.266 09:16:33 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2578425 0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 0 idle 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578425 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578425 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:20.50 reactor_0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 2578425 1 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 2578425 1 idle 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=2578425 00:32:19.799 
09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 2578425 -w 256 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='2578429 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 2578429 root 20 0 128.2g 72960 33792 S 0.0 0.0 0:10.10 reactor_1 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > 
idle_threshold )) 00:32:19.799 09:16:35 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:19.800 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:20.059 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:20.059 rmmod nvme_tcp 00:32:20.059 rmmod nvme_fabrics 00:32:20.059 rmmod nvme_keyring 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:20.059 09:16:35 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 2578425 ']' 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 2578425 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 2578425 ']' 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 2578425 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:20.059 09:16:35 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2578425 00:32:20.059 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:20.059 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:20.059 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2578425' 00:32:20.059 killing process with pid 2578425 00:32:20.059 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 2578425 00:32:20.059 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 2578425 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@264 -- # local dev 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> 
/dev/null' 00:32:20.318 09:16:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # return 0 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@284 -- # iptr 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-save 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:22.854 09:16:38 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-restore
00:32:22.854
00:32:22.854 real 0m23.064s
00:32:22.854 user 0m39.716s
00:32:22.854 sys 0m8.548s
09:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 09:16:38 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x
00:32:22.854 ************************************
00:32:22.854 END TEST nvmf_interrupt
00:32:22.854 ************************************
00:32:22.854
00:32:22.854 real 27m40.883s
00:32:22.854 user 57m14.105s
00:32:22.854 sys 9m16.854s
09:16:38 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 09:16:38 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:32:22.854 ************************************
00:32:22.854 END TEST nvmf_tcp
00:32:22.854 ************************************
00:32:22.854 09:16:38 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:22.854 09:16:38 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.854 09:16:38 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:22.854 09:16:38 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:22.854 09:16:38 -- common/autotest_common.sh@10 -- # set +x
00:32:22.854 ************************************
00:32:22.854 START TEST spdkcli_nvmf_tcp
00:32:22.854 ************************************
00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:22.854 * Looking for test storage... 00:32:22.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:22.854 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.855 --rc genhtml_branch_coverage=1 00:32:22.855 --rc genhtml_function_coverage=1 00:32:22.855 --rc genhtml_legend=1 00:32:22.855 --rc geninfo_all_blocks=1 00:32:22.855 --rc geninfo_unexecuted_blocks=1 00:32:22.855 00:32:22.855 ' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.855 --rc genhtml_branch_coverage=1 00:32:22.855 --rc genhtml_function_coverage=1 00:32:22.855 --rc genhtml_legend=1 00:32:22.855 --rc geninfo_all_blocks=1 00:32:22.855 --rc 
geninfo_unexecuted_blocks=1 00:32:22.855 00:32:22.855 ' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.855 --rc genhtml_branch_coverage=1 00:32:22.855 --rc genhtml_function_coverage=1 00:32:22.855 --rc genhtml_legend=1 00:32:22.855 --rc geninfo_all_blocks=1 00:32:22.855 --rc geninfo_unexecuted_blocks=1 00:32:22.855 00:32:22.855 ' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:22.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:22.855 --rc genhtml_branch_coverage=1 00:32:22.855 --rc genhtml_function_coverage=1 00:32:22.855 --rc genhtml_legend=1 00:32:22.855 --rc geninfo_all_blocks=1 00:32:22.855 --rc geninfo_unexecuted_blocks=1 00:32:22.855 00:32:22.855 ' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:22.855 
09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:22.855 
09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:22.855 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2581230 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2581230 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 2581230 ']' 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- 
common/autotest_common.sh@840 -- # local max_retries=100 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:22.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:22.855 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:22.855 [2024-11-20 09:16:38.691435] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:32:22.855 [2024-11-20 09:16:38.691486] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2581230 ] 00:32:22.855 [2024-11-20 09:16:38.763960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:22.855 [2024-11-20 09:16:38.808054] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:22.855 [2024-11-20 09:16:38.808063] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:23.114 09:16:38 spdkcli_nvmf_tcp 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:23.114 09:16:38 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:23.114 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:23.114 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:23.114 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:23.114 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:32:23.114 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:23.114 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:23.114 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.114 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.114 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.114 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:23.114 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:23.115 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:23.115 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:23.115 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:23.115 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:23.115 ' 00:32:25.642 [2024-11-20 09:16:41.640683] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:27.018 [2024-11-20 09:16:42.977128] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:32:29.547 [2024-11-20 09:16:45.464766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 
port 4261 *** 00:32:32.077 [2024-11-20 09:16:47.615506] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:32:33.451 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:32:33.451 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:32:33.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.451 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 
00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:32:33.451 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:32:33.451 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:33.451 
09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:32:33.451 09:16:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:34.018 09:16:49 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:32:34.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:32:34.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:32:34.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:32:34.018 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:32:34.018 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:32:34.018 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:32:34.018 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:32:34.018 ' 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:32:40.577 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:32:40.577 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:32:40.577 Executing command: ['/nvmf/subsystem 
delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:32:40.577 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2581230 ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2581230' 00:32:40.577 killing process with pid 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 
-- # cleanup 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2581230 ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2581230 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 2581230 ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 2581230 00:32:40.577 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2581230) - No such process 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 2581230 is not found' 00:32:40.577 Process with pid 2581230 is not found 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:32:40.577 00:32:40.577 real 0m17.341s 00:32:40.577 user 0m38.215s 00:32:40.577 sys 0m0.787s 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:40.577 09:16:55 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:40.577 ************************************ 00:32:40.577 END TEST spdkcli_nvmf_tcp 00:32:40.577 ************************************ 00:32:40.577 09:16:55 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.577 09:16:55 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:40.577 09:16:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:40.577 09:16:55 -- 
common/autotest_common.sh@10 -- # set +x 00:32:40.577 ************************************ 00:32:40.577 START TEST nvmf_identify_passthru 00:32:40.577 ************************************ 00:32:40.577 09:16:55 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:32:40.577 * Looking for test storage... 00:32:40.577 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:40.577 09:16:55 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:40.577 09:16:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:32:40.577 09:16:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:40.577 09:16:55 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:32:40.577 09:16:55 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:32:40.577 
09:16:56 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:40.577 09:16:56 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:32:40.577 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:40.577 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:40.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.577 --rc genhtml_branch_coverage=1 00:32:40.577 --rc genhtml_function_coverage=1 00:32:40.577 --rc genhtml_legend=1 00:32:40.577 --rc geninfo_all_blocks=1 00:32:40.577 --rc geninfo_unexecuted_blocks=1 00:32:40.577 00:32:40.577 ' 
00:32:40.577 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:40.577 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.577 --rc genhtml_branch_coverage=1 00:32:40.577 --rc genhtml_function_coverage=1 00:32:40.577 --rc genhtml_legend=1 00:32:40.577 --rc geninfo_all_blocks=1 00:32:40.577 --rc geninfo_unexecuted_blocks=1 00:32:40.577 00:32:40.577 ' 00:32:40.577 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:40.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.578 --rc genhtml_branch_coverage=1 00:32:40.578 --rc genhtml_function_coverage=1 00:32:40.578 --rc genhtml_legend=1 00:32:40.578 --rc geninfo_all_blocks=1 00:32:40.578 --rc geninfo_unexecuted_blocks=1 00:32:40.578 00:32:40.578 ' 00:32:40.578 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:40.578 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:40.578 --rc genhtml_branch_coverage=1 00:32:40.578 --rc genhtml_function_coverage=1 00:32:40.578 --rc genhtml_legend=1 00:32:40.578 --rc geninfo_all_blocks=1 00:32:40.578 --rc geninfo_unexecuted_blocks=1 00:32:40.578 00:32:40.578 ' 00:32:40.578 09:16:56 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@13 -- # 
NVMF_TRANSPORT_OPTS= 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:32:40.578 09:16:56 
nvmf_identify_passthru -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:40.578 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:40.578 09:16:56 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:32:40.578 09:16:56 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:40.578 09:16:56 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:40.578 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:32:40.578 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:40.578 09:16:56 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:32:40.578 09:16:56 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- 
nvmf/common.sh@131 -- # local -a pci_devs 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:45.851 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:45.851 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:45.851 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:45.852 Found net devices under 0000:86:00.0: cvl_0_0 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:45.852 
09:17:01 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:45.852 Found net devices under 0000:86:00.1: cvl_0_1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # create_target_ns 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:45.852 10.0.0.1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:45.852 10.0.0.2 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
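The `val_to_ip` calls traced above (nvmf/setup.sh@11-13) turn the pooled integers 167772161 and 167772162 into 10.0.0.1 and 10.0.0.2. A sketch of that conversion, assuming the standard octet-extraction approach (the exact SPDK implementation may differ):

```shell
# Hedged sketch of integer -> dotted-quad conversion: extract each octet of
# the 32-bit value with shifts and masks, most-significant octet first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1  (0x0a000001)
val_to_ip 167772162   # 10.0.0.2  (0x0a000002)
```

This is why the log shows `printf '%u.%u.%u.%u\n' 10 0 0 1`: the ip_pool base 0x0a000001 increments by one per interface in a pair and by two per pair.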
00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:45.852 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:46.112 09:17:01 nvmf_identify_passthru -- 
nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@90 -- 
# [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:46.112 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:46.112 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.224 ms 00:32:46.112 00:32:46.112 --- 10.0.0.1 ping statistics --- 00:32:46.112 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.112 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:46.112 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:46.112 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:46.112 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.077 ms 00:32:46.112 00:32:46.112 --- 10.0.0.2 ping statistics --- 00:32:46.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:46.113 rtt min/avg/max/mdev = 0.077/0.077/0.077/0.000 ms 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:46.113 09:17:01 
nvmf_identify_passthru -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:46.113 09:17:01 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:46.113 09:17:02 
nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:46.113 09:17:02 
nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:46.113 09:17:02 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:46.113 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:46.113 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:32:46.113 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:32:46.372 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:32:46.372 09:17:02 
nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:32:46.372 09:17:02 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:32:46.372 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:32:46.372 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:32:46.372 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:32:46.372 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:46.372 09:17:02 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:32:50.561 09:17:06 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:32:50.561 09:17:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:32:50.561 09:17:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:32:50.561 09:17:06 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:32:54.746 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:32:54.746 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:32:54.746 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:54.746 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.746 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:32:54.746 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:54.746 09:17:10 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.746 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2588534 00:32:54.746 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:32:54.747 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:32:54.747 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2588534 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 2588534 ']' 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:54.747 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.747 [2024-11-20 09:17:10.579815] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:32:54.747 [2024-11-20 09:17:10.579866] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:54.747 [2024-11-20 09:17:10.663932] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:54.747 [2024-11-20 09:17:10.707185] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:54.747 [2024-11-20 09:17:10.707223] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:54.747 [2024-11-20 09:17:10.707230] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:54.747 [2024-11-20 09:17:10.707236] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:54.747 [2024-11-20 09:17:10.707241] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:32:54.747 [2024-11-20 09:17:10.708811] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:54.747 [2024-11-20 09:17:10.708914] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:54.747 [2024-11-20 09:17:10.708915] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:54.747 [2024-11-20 09:17:10.708830] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:32:54.747 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.747 INFO: Log level set to 20 00:32:54.747 INFO: Requests: 00:32:54.747 { 00:32:54.747 "jsonrpc": "2.0", 00:32:54.747 "method": "nvmf_set_config", 00:32:54.747 "id": 1, 00:32:54.747 "params": { 00:32:54.747 "admin_cmd_passthru": { 00:32:54.747 "identify_ctrlr": true 00:32:54.747 } 00:32:54.747 } 00:32:54.747 } 00:32:54.747 00:32:54.747 INFO: response: 00:32:54.747 { 00:32:54.747 "jsonrpc": "2.0", 00:32:54.747 "id": 1, 00:32:54.747 "result": true 00:32:54.747 } 00:32:54.747 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:54.747 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:54.747 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:54.747 INFO: Setting log level to 20 00:32:54.747 INFO: Setting log level to 20 00:32:54.747 INFO: Log level set to 20 00:32:54.747 INFO: Log level set to 20 00:32:54.747 
INFO: Requests: 00:32:54.747 { 00:32:54.747 "jsonrpc": "2.0", 00:32:54.747 "method": "framework_start_init", 00:32:54.747 "id": 1 00:32:54.747 } 00:32:54.747 00:32:54.747 INFO: Requests: 00:32:54.747 { 00:32:54.747 "jsonrpc": "2.0", 00:32:54.747 "method": "framework_start_init", 00:32:54.747 "id": 1 00:32:54.747 } 00:32:54.747 00:32:55.004 [2024-11-20 09:17:10.832250] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:32:55.004 INFO: response: 00:32:55.004 { 00:32:55.004 "jsonrpc": "2.0", 00:32:55.004 "id": 1, 00:32:55.004 "result": true 00:32:55.004 } 00:32:55.004 00:32:55.004 INFO: response: 00:32:55.004 { 00:32:55.004 "jsonrpc": "2.0", 00:32:55.004 "id": 1, 00:32:55.004 "result": true 00:32:55.004 } 00:32:55.004 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.004 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.004 INFO: Setting log level to 40 00:32:55.004 INFO: Setting log level to 40 00:32:55.004 INFO: Setting log level to 40 00:32:55.004 [2024-11-20 09:17:10.841575] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:55.004 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:55.004 09:17:10 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:32:55.004 09:17:10 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:55.004 09:17:10 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.279 Nvme0n1 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.279 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.279 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.279 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.279 [2024-11-20 09:17:13.748179] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.279 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:32:58.279 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.279 09:17:13 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.279 [ 00:32:58.279 { 00:32:58.279 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:32:58.279 "subtype": "Discovery", 00:32:58.279 "listen_addresses": [], 00:32:58.279 "allow_any_host": true, 00:32:58.279 "hosts": [] 00:32:58.279 }, 00:32:58.279 { 00:32:58.279 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:32:58.279 "subtype": "NVMe", 00:32:58.279 "listen_addresses": [ 00:32:58.279 { 00:32:58.279 "trtype": "TCP", 00:32:58.279 "adrfam": "IPv4", 00:32:58.279 "traddr": "10.0.0.2", 00:32:58.279 "trsvcid": "4420" 00:32:58.279 } 00:32:58.279 ], 00:32:58.279 "allow_any_host": true, 00:32:58.279 "hosts": [], 00:32:58.279 "serial_number": "SPDK00000000000001", 00:32:58.280 "model_number": "SPDK bdev Controller", 00:32:58.280 "max_namespaces": 1, 00:32:58.280 "min_cntlid": 1, 00:32:58.280 "max_cntlid": 65519, 00:32:58.280 "namespaces": [ 00:32:58.280 { 00:32:58.280 "nsid": 1, 00:32:58.280 "bdev_name": "Nvme0n1", 00:32:58.280 "name": "Nvme0n1", 00:32:58.280 "nguid": "48932800476B40199915EACE2B78B951", 00:32:58.280 "uuid": "48932800-476b-4019-9915-eace2b78b951" 00:32:58.280 } 00:32:58.280 ] 00:32:58.280 } 00:32:58.280 ] 00:32:58.280 09:17:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:32:58.280 09:17:13 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:32:58.280 09:17:14 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:58.280 rmmod nvme_tcp 00:32:58.280 rmmod nvme_fabrics 00:32:58.280 rmmod nvme_keyring 00:32:58.280 09:17:14 
nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 2588534 ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 2588534 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 2588534 ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 2588534 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2588534 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2588534' 00:32:58.280 killing process with pid 2588534 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 2588534 00:32:58.280 09:17:14 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 2588534 00:32:59.654 09:17:15 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:59.654 09:17:15 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:32:59.654 09:17:15 nvmf_identify_passthru -- nvmf/setup.sh@264 -- # local dev 00:32:59.654 09:17:15 nvmf_identify_passthru -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:59.654 09:17:15 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:59.916 09:17:15 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:32:59.916 09:17:15 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # return 0 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:01.961 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_1 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/setup.sh@284 -- # iptr 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-save 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:01.962 09:17:17 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-restore 00:33:01.962 00:33:01.962 real 0m21.933s 00:33:01.962 user 0m26.697s 00:33:01.962 sys 0m6.260s 00:33:01.962 09:17:17 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:01.962 09:17:17 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:01.962 ************************************ 00:33:01.962 END TEST nvmf_identify_passthru 00:33:01.962 ************************************ 00:33:01.962 09:17:17 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.962 09:17:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:01.962 09:17:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:01.962 09:17:17 -- common/autotest_common.sh@10 -- # set +x 00:33:01.962 ************************************ 00:33:01.962 START TEST nvmf_dif 00:33:01.962 ************************************ 00:33:01.962 09:17:17 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:01.962 * Looking for test storage... 
00:33:01.962 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:01.962 09:17:17 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:01.962 09:17:17 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:01.962 09:17:17 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:02.221 09:17:17 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:02.221 09:17:17 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:02.221 09:17:18 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:02.221 09:17:18 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.221 --rc genhtml_branch_coverage=1 00:33:02.221 --rc genhtml_function_coverage=1 00:33:02.221 --rc genhtml_legend=1 00:33:02.221 --rc geninfo_all_blocks=1 00:33:02.221 --rc geninfo_unexecuted_blocks=1 00:33:02.221 00:33:02.221 ' 00:33:02.221 09:17:18 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.221 --rc genhtml_branch_coverage=1 00:33:02.221 --rc genhtml_function_coverage=1 00:33:02.221 --rc genhtml_legend=1 00:33:02.221 --rc geninfo_all_blocks=1 00:33:02.221 --rc geninfo_unexecuted_blocks=1 00:33:02.221 00:33:02.221 ' 00:33:02.221 09:17:18 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.221 --rc genhtml_branch_coverage=1 00:33:02.221 --rc genhtml_function_coverage=1 00:33:02.221 --rc genhtml_legend=1 00:33:02.221 --rc geninfo_all_blocks=1 00:33:02.221 --rc geninfo_unexecuted_blocks=1 00:33:02.221 00:33:02.221 ' 00:33:02.221 09:17:18 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:02.221 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:02.221 --rc genhtml_branch_coverage=1 00:33:02.221 --rc genhtml_function_coverage=1 00:33:02.221 --rc genhtml_legend=1 00:33:02.221 --rc geninfo_all_blocks=1 00:33:02.221 --rc geninfo_unexecuted_blocks=1 00:33:02.221 00:33:02.221 ' 00:33:02.221 09:17:18 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:02.221 09:17:18 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:02.221 09:17:18 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:02.221 09:17:18 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.222 09:17:18 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.222 09:17:18 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.222 09:17:18 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:02.222 09:17:18 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:02.222 09:17:18 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:02.222 09:17:18 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:02.222 09:17:18 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:02.222 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:02.222 09:17:18 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:02.222 09:17:18 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:02.222 09:17:18 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:02.222 09:17:18 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:02.222 09:17:18 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:33:02.222 09:17:18 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:02.222 09:17:18 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:33:02.222 09:17:18 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:02.222 09:17:18 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:33:02.222 09:17:18 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:08.792 
09:17:23 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:08.792 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:08.792 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:08.792 09:17:23 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:08.793 Found net devices under 0000:86:00.0: cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:08.793 Found net devices under 0000:86:00.1: cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:08.793 09:17:23 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@257 -- # create_target_ns 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:33:08.793 
09:17:23 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:08.793 09:17:23 nvmf_dif -- 
nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:08.793 10.0.0.1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:08.793 10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:08.793 09:17:23 nvmf_dif -- 
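In the `set_ip` calls above, addresses come from an integer pool (`0x0a000001` is 10.0.0.1) and `val_to_ip` renders each value as a dotted quad before `ip addr add`. A plausible implementation of that helper, consistent with the `printf '%u.%u.%u.%u\n' 10 0 0 1` seen in the trace (the exact body in `nvmf/setup.sh` may differ):

```shell
# Convert a 32-bit integer to dotted-quad notation with shift-and-mask arithmetic.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
        $((val >> 8 & 0xff))  $((val & 0xff))
}

val_to_ip 167772161   # → 10.0.0.1 (initiator side)
val_to_ip 167772162   # → 10.0.0.2 (target side)
```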
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:08.793 09:17:23 nvmf_dif -- 
nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:08.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:33:08.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:33:08.793 00:33:08.793 --- 10.0.0.1 ping statistics --- 00:33:08.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.793 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:08.793 09:17:23 
nvmf_dif -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:08.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:08.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.221 ms 00:33:08.793 00:33:08.793 --- 10.0.0.2 ping statistics --- 00:33:08.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:08.793 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:08.793 09:17:23 nvmf_dif -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:33:08.793 09:17:23 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:10.697 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:10.697 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.3 (8086 2021): Already using the vfio-pci 
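The `setup_interfaces 1 phy` loop traced above hands each initiator/target pair two consecutive integers from the pool and guards the last octet against overflow. A condensed replay of that arithmetic (variable names mirror the trace; `167772161` is 10.0.0.1):

```shell
# One pair (no=1) drawn from the 0x0a000001 pool, two addresses per pair.
no=1 ip_pool=$((0x0a000001))
for (( _dev = 0; _dev < no && (_dev + no) * 2 <= 255; _dev++, ip_pool += 2 )); do
    ip=$ip_pool
    ips=("$ip" $((++ip)))     # initiator IP, then target IP
    echo "pair $_dev: initiator=${ips[0]} target=${ips[1]}"
done
```

With one pair this yields 167772161/167772162, matching the 10.0.0.1 and 10.0.0.2 assigned to `cvl_0_0` and `cvl_0_1` above.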
driver 00:33:10.697 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:10.697 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:33:10.955 09:17:26 nvmf_dif -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:10.955 
09:17:26 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:10.955 09:17:26 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=2594166 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 2594166 00:33:10.955 09:17:26 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 2594166 ']' 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:10.955 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:10.955 09:17:26 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:10.955 [2024-11-20 09:17:26.975858] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
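Before `nvmfappstart`, the transport option string is built up in stages: `common.sh` sets `-t tcp`, appends `-o` for the TCP transport, and `dif.sh@136` adds the DIF flag. Replaying just that concatenation (values copied from the trace):

```shell
NVMF_TRANSPORT_OPTS='-t tcp'
NVMF_TRANSPORT_OPTS+=' -o'                        # appended for the TCP transport
NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip'     # added by target/dif.sh
echo "$NVMF_TRANSPORT_OPTS"
```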
00:33:10.955 [2024-11-20 09:17:26.975899] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:11.213 [2024-11-20 09:17:27.042989] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.213 [2024-11-20 09:17:27.085228] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:11.213 [2024-11-20 09:17:27.085265] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:11.213 [2024-11-20 09:17:27.085272] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:11.213 [2024-11-20 09:17:27.085279] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:11.213 [2024-11-20 09:17:27.085284] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:11.213 [2024-11-20 09:17:27.085860] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:11.213 09:17:27 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.213 09:17:27 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:11.213 09:17:27 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:11.213 09:17:27 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.213 [2024-11-20 09:17:27.220651] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.213 09:17:27 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:11.213 09:17:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:11.472 ************************************ 00:33:11.472 START TEST fio_dif_1_default 00:33:11.472 ************************************ 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.472 bdev_null0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:11.472 [2024-11-20 09:17:27.292990] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:11.472 { 00:33:11.472 "params": { 00:33:11.472 "name": "Nvme$subsystem", 00:33:11.472 "trtype": "$TEST_TRANSPORT", 00:33:11.472 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:11.472 "adrfam": "ipv4", 00:33:11.472 "trsvcid": "$NVMF_PORT", 00:33:11.472 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:11.472 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:11.472 "hdgst": ${hdgst:-false}, 00:33:11.472 "ddgst": ${ddgst:-false} 00:33:11.472 }, 00:33:11.472 "method": "bdev_nvme_attach_controller" 00:33:11.472 } 00:33:11.472 EOF 00:33:11.472 )") 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:11.472 09:17:27 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 
00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:11.472 "params": { 00:33:11.472 "name": "Nvme0", 00:33:11.472 "trtype": "tcp", 00:33:11.472 "traddr": "10.0.0.2", 00:33:11.472 "adrfam": "ipv4", 00:33:11.472 "trsvcid": "4420", 00:33:11.472 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:11.472 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:11.472 "hdgst": false, 00:33:11.472 "ddgst": false 00:33:11.472 }, 00:33:11.472 "method": "bdev_nvme_attach_controller" 00:33:11.472 }' 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:11.472 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:11.473 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:11.473 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:11.473 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:11.473 09:17:27 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:11.731 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:11.731 fio-3.35 
00:33:11.731 Starting 1 thread 00:33:23.934 00:33:23.934 filename0: (groupid=0, jobs=1): err= 0: pid=2594450: Wed Nov 20 09:17:38 2024 00:33:23.934 read: IOPS=206, BW=824KiB/s (844kB/s)(8272KiB/10033msec) 00:33:23.934 slat (nsec): min=5860, max=36251, avg=6300.87, stdev=1131.30 00:33:23.934 clat (usec): min=378, max=44231, avg=19387.15, stdev=20447.16 00:33:23.934 lat (usec): min=384, max=44266, avg=19393.46, stdev=20447.11 00:33:23.934 clat percentiles (usec): 00:33:23.934 | 1.00th=[ 396], 5.00th=[ 412], 10.00th=[ 437], 20.00th=[ 457], 00:33:23.934 | 30.00th=[ 486], 40.00th=[ 603], 50.00th=[ 619], 60.00th=[41157], 00:33:23.934 | 70.00th=[41157], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:23.934 | 99.00th=[42730], 99.50th=[42730], 99.90th=[44303], 99.95th=[44303], 00:33:23.934 | 99.99th=[44303] 00:33:23.934 bw ( KiB/s): min= 704, max= 1024, per=100.00%, avg=825.60, stdev=85.36, samples=20 00:33:23.934 iops : min= 176, max= 256, avg=206.40, stdev=21.34, samples=20 00:33:23.934 lat (usec) : 500=32.35%, 750=21.62% 00:33:23.934 lat (msec) : 50=46.03% 00:33:23.934 cpu : usr=92.71%, sys=7.03%, ctx=15, majf=0, minf=0 00:33:23.934 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:23.934 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.934 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:23.934 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:23.934 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:23.934 00:33:23.934 Run status group 0 (all jobs): 00:33:23.934 READ: bw=824KiB/s (844kB/s), 824KiB/s-824KiB/s (844kB/s-844kB/s), io=8272KiB (8471kB), run=10033-10033msec 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 
00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 00:33:23.934 real 0m11.243s 00:33:23.934 user 0m15.912s 00:33:23.934 sys 0m1.008s 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 ************************************ 00:33:23.934 END TEST fio_dif_1_default 00:33:23.934 ************************************ 00:33:23.934 09:17:38 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:33:23.934 09:17:38 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:23.934 09:17:38 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 ************************************ 00:33:23.934 START TEST fio_dif_1_multi_subsystems 00:33:23.934 ************************************ 00:33:23.934 09:17:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 bdev_null0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 [2024-11-20 09:17:38.609284] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 bdev_null1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # 
fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:33:23.934 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:23.934 { 00:33:23.934 "params": { 00:33:23.934 "name": "Nvme$subsystem", 00:33:23.934 "trtype": "$TEST_TRANSPORT", 00:33:23.934 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.934 "adrfam": "ipv4", 00:33:23.934 "trsvcid": "$NVMF_PORT", 00:33:23.934 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.934 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.934 "hdgst": ${hdgst:-false}, 00:33:23.934 "ddgst": ${ddgst:-false} 00:33:23.934 }, 00:33:23.934 "method": "bdev_nvme_attach_controller" 00:33:23.934 } 00:33:23.934 EOF 00:33:23.935 )") 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 
00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:23.935 { 00:33:23.935 "params": { 00:33:23.935 "name": "Nvme$subsystem", 00:33:23.935 "trtype": "$TEST_TRANSPORT", 00:33:23.935 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:23.935 "adrfam": "ipv4", 00:33:23.935 "trsvcid": "$NVMF_PORT", 00:33:23.935 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:23.935 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:23.935 "hdgst": ${hdgst:-false}, 00:33:23.935 "ddgst": ${ddgst:-false} 00:33:23.935 }, 00:33:23.935 "method": "bdev_nvme_attach_controller" 00:33:23.935 } 00:33:23.935 EOF 00:33:23.935 )") 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:23.935 "params": { 00:33:23.935 "name": "Nvme0", 00:33:23.935 "trtype": "tcp", 00:33:23.935 "traddr": "10.0.0.2", 00:33:23.935 "adrfam": "ipv4", 00:33:23.935 "trsvcid": "4420", 00:33:23.935 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:23.935 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:23.935 "hdgst": false, 00:33:23.935 "ddgst": false 00:33:23.935 }, 00:33:23.935 "method": "bdev_nvme_attach_controller" 00:33:23.935 },{ 00:33:23.935 "params": { 00:33:23.935 "name": "Nvme1", 00:33:23.935 "trtype": "tcp", 00:33:23.935 "traddr": "10.0.0.2", 00:33:23.935 "adrfam": "ipv4", 00:33:23.935 "trsvcid": "4420", 00:33:23.935 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:23.935 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:23.935 "hdgst": false, 00:33:23.935 "ddgst": false 00:33:23.935 }, 00:33:23.935 "method": "bdev_nvme_attach_controller" 00:33:23.935 }' 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:23.935 09:17:38 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:23.935 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.935 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:23.935 fio-3.35 00:33:23.935 Starting 2 threads 00:33:33.903 00:33:33.903 filename0: (groupid=0, jobs=1): err= 0: pid=2596373: Wed Nov 20 09:17:49 2024 00:33:33.903 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10001msec) 00:33:33.903 slat (nsec): min=6165, max=64386, avg=11729.43, stdev=8456.16 00:33:33.903 clat (usec): min=40786, max=42961, avg=41288.58, stdev=477.67 00:33:33.903 lat (usec): min=40793, max=42972, avg=41300.31, stdev=477.76 00:33:33.903 clat percentiles (usec): 00:33:33.903 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:33:33.903 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:33:33.903 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:33:33.903 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:33:33.903 | 99.99th=[42730] 00:33:33.903 bw ( KiB/s): min= 352, max= 416, per=31.38%, avg=387.37, stdev=14.68, samples=19 00:33:33.903 iops : min= 88, max= 104, avg=96.84, stdev= 3.67, samples=19 00:33:33.903 lat (msec) : 50=100.00% 00:33:33.903 cpu : usr=97.61%, sys=2.12%, ctx=10, majf=0, minf=113 00:33:33.903 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.903 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.903 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.903 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.903 filename1: (groupid=0, jobs=1): err= 0: pid=2596374: Wed Nov 20 09:17:49 2024 00:33:33.903 read: IOPS=211, BW=847KiB/s (867kB/s)(8480KiB/10017msec) 00:33:33.903 slat (nsec): min=6149, max=54916, avg=9085.10, stdev=5638.73 00:33:33.903 clat (usec): min=419, max=42573, avg=18872.89, stdev=20416.99 00:33:33.903 lat (usec): min=426, max=42580, avg=18881.98, stdev=20415.33 00:33:33.903 clat percentiles (usec): 00:33:33.903 | 1.00th=[ 437], 5.00th=[ 453], 10.00th=[ 469], 20.00th=[ 482], 00:33:33.903 | 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 644], 60.00th=[41157], 00:33:33.903 | 70.00th=[41681], 80.00th=[41681], 90.00th=[41681], 95.00th=[42206], 00:33:33.903 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:33:33.903 | 99.99th=[42730] 00:33:33.903 bw ( KiB/s): min= 672, max= 960, per=68.61%, avg=846.40, stdev=90.79, samples=20 00:33:33.903 iops : min= 168, max= 240, avg=211.60, stdev=22.70, samples=20 00:33:33.903 lat (usec) : 500=36.75%, 750=17.22%, 1000=1.32% 00:33:33.903 lat (msec) : 50=44.72% 00:33:33.903 cpu : usr=98.94%, sys=0.75%, ctx=32, majf=0, minf=131 00:33:33.903 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:33.903 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.903 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:33.903 issued rwts: total=2120,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:33.903 latency : target=0, window=0, percentile=100.00%, depth=4 00:33:33.903 00:33:33.903 Run status group 0 (all jobs): 00:33:33.903 READ: bw=1233KiB/s (1263kB/s), 387KiB/s-847KiB/s (396kB/s-867kB/s), io=12.1MiB (12.6MB), run=10001-10017msec 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:33:33.903 
09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:33.903 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.162 00:33:34.162 real 0m11.399s 00:33:34.162 user 0m26.463s 00:33:34.162 sys 0m0.631s 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:34.162 09:17:49 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 ************************************ 00:33:34.162 END TEST fio_dif_1_multi_subsystems 00:33:34.162 ************************************ 00:33:34.162 09:17:50 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:33:34.162 09:17:50 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:34.162 09:17:50 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:34.162 09:17:50 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:34.162 ************************************ 00:33:34.162 START TEST fio_dif_rand_params 00:33:34.162 ************************************ 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:33:34.162 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.163 bdev_null0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.163 09:17:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:34.163 [2024-11-20 09:17:50.089496] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:34.163 { 00:33:34.163 "params": { 00:33:34.163 "name": "Nvme$subsystem", 00:33:34.163 "trtype": "$TEST_TRANSPORT", 00:33:34.163 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:33:34.163 "adrfam": "ipv4", 00:33:34.163 "trsvcid": "$NVMF_PORT", 00:33:34.163 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:34.163 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:34.163 "hdgst": ${hdgst:-false}, 00:33:34.163 "ddgst": ${ddgst:-false} 00:33:34.163 }, 00:33:34.163 "method": "bdev_nvme_attach_controller" 00:33:34.163 } 00:33:34.163 EOF 00:33:34.163 )") 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:34.163 09:17:50 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:34.163 "params": { 00:33:34.163 "name": "Nvme0", 00:33:34.163 "trtype": "tcp", 00:33:34.163 "traddr": "10.0.0.2", 00:33:34.163 "adrfam": "ipv4", 00:33:34.163 "trsvcid": "4420", 00:33:34.163 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:34.163 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:34.163 "hdgst": false, 00:33:34.163 "ddgst": false 00:33:34.163 }, 00:33:34.163 "method": "bdev_nvme_attach_controller" 00:33:34.163 }' 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:34.163 09:17:50 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:34.421 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:34.421 ... 00:33:34.422 fio-3.35 00:33:34.422 Starting 3 threads 00:33:40.987 00:33:40.987 filename0: (groupid=0, jobs=1): err= 0: pid=2598251: Wed Nov 20 09:17:55 2024 00:33:40.987 read: IOPS=320, BW=40.1MiB/s (42.1MB/s)(201MiB/5007msec) 00:33:40.987 slat (nsec): min=6252, max=33183, avg=10809.15, stdev=2023.14 00:33:40.987 clat (usec): min=4519, max=49543, avg=9333.22, stdev=5618.96 00:33:40.987 lat (usec): min=4526, max=49556, avg=9344.03, stdev=5618.86 00:33:40.987 clat percentiles (usec): 00:33:40.987 | 1.00th=[ 5669], 5.00th=[ 6456], 10.00th=[ 7111], 20.00th=[ 7767], 00:33:40.987 | 30.00th=[ 8094], 40.00th=[ 8356], 50.00th=[ 8586], 60.00th=[ 8979], 00:33:40.987 | 70.00th=[ 9110], 80.00th=[ 9503], 90.00th=[10028], 95.00th=[10290], 00:33:40.987 | 99.00th=[46924], 99.50th=[48497], 99.90th=[49021], 99.95th=[49546], 00:33:40.987 | 99.99th=[49546] 00:33:40.987 bw ( KiB/s): min=24832, max=47360, per=34.78%, avg=41088.00, stdev=6751.86, samples=10 00:33:40.987 iops : min= 194, max= 370, avg=321.00, stdev=52.75, samples=10 00:33:40.987 lat (msec) : 10=90.17%, 20=7.78%, 50=2.05% 00:33:40.987 cpu : usr=94.53%, sys=5.15%, ctx=23, majf=0, minf=0 00:33:40.987 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.987 issued rwts: total=1607,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.987 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.987 filename0: (groupid=0, jobs=1): err= 0: pid=2598252: Wed Nov 20 09:17:55 2024 00:33:40.987 read: IOPS=308, BW=38.5MiB/s (40.4MB/s)(193MiB/5005msec) 00:33:40.987 slat (nsec): min=6305, max=25991, avg=10766.04, stdev=1970.81 00:33:40.987 clat (usec): 
min=3202, max=51588, avg=9716.99, stdev=4606.70 00:33:40.987 lat (usec): min=3209, max=51595, avg=9727.76, stdev=4606.93 00:33:40.987 clat percentiles (usec): 00:33:40.987 | 1.00th=[ 3490], 5.00th=[ 3949], 10.00th=[ 6259], 20.00th=[ 8029], 00:33:40.987 | 30.00th=[ 8979], 40.00th=[ 9372], 50.00th=[ 9634], 60.00th=[10028], 00:33:40.987 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11600], 95.00th=[12125], 00:33:40.987 | 99.00th=[43779], 99.50th=[49021], 99.90th=[51119], 99.95th=[51643], 00:33:40.987 | 99.99th=[51643] 00:33:40.987 bw ( KiB/s): min=32512, max=46592, per=33.39%, avg=39449.60, stdev=3657.31, samples=10 00:33:40.987 iops : min= 254, max= 364, avg=308.20, stdev=28.57, samples=10 00:33:40.987 lat (msec) : 4=5.31%, 10=53.01%, 20=40.51%, 50=0.97%, 100=0.19% 00:33:40.987 cpu : usr=94.04%, sys=5.68%, ctx=11, majf=0, minf=2 00:33:40.987 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.987 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.987 issued rwts: total=1543,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.987 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.987 filename0: (groupid=0, jobs=1): err= 0: pid=2598253: Wed Nov 20 09:17:55 2024 00:33:40.987 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5046msec) 00:33:40.987 slat (nsec): min=6160, max=25071, avg=10686.57, stdev=1989.68 00:33:40.987 clat (usec): min=3556, max=51333, avg=10003.43, stdev=6081.07 00:33:40.987 lat (usec): min=3563, max=51345, avg=10014.12, stdev=6081.06 00:33:40.987 clat percentiles (usec): 00:33:40.987 | 1.00th=[ 4228], 5.00th=[ 6259], 10.00th=[ 7177], 20.00th=[ 8094], 00:33:40.988 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:33:40.988 | 70.00th=[10028], 80.00th=[10421], 90.00th=[11076], 95.00th=[11600], 00:33:40.988 | 99.00th=[48497], 99.50th=[50070], 99.90th=[50594], 99.95th=[51119], 
00:33:40.988 | 99.99th=[51119] 00:33:40.988 bw ( KiB/s): min=24064, max=47360, per=32.61%, avg=38528.00, stdev=6091.93, samples=10 00:33:40.988 iops : min= 188, max= 370, avg=301.00, stdev=47.59, samples=10 00:33:40.988 lat (msec) : 4=0.93%, 10=69.41%, 20=27.34%, 50=1.86%, 100=0.46% 00:33:40.988 cpu : usr=95.00%, sys=4.68%, ctx=11, majf=0, minf=0 00:33:40.988 IO depths : 1=0.4%, 2=99.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:40.988 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.988 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:40.988 issued rwts: total=1507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:40.988 latency : target=0, window=0, percentile=100.00%, depth=3 00:33:40.988 00:33:40.988 Run status group 0 (all jobs): 00:33:40.988 READ: bw=115MiB/s (121MB/s), 37.3MiB/s-40.1MiB/s (39.1MB/s-42.1MB/s), io=582MiB (610MB), run=5005-5046msec 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 bdev_null0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 
09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 [2024-11-20 09:17:56.209331] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 bdev_null1 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 
09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
00:33:40.988 bdev_null2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem 
config 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:40.988 { 00:33:40.988 "params": { 00:33:40.988 "name": "Nvme$subsystem", 00:33:40.988 "trtype": "$TEST_TRANSPORT", 00:33:40.988 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.988 "adrfam": "ipv4", 00:33:40.988 "trsvcid": "$NVMF_PORT", 00:33:40.988 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.988 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.988 "hdgst": ${hdgst:-false}, 00:33:40.988 "ddgst": ${ddgst:-false} 00:33:40.988 }, 00:33:40.988 "method": "bdev_nvme_attach_controller" 00:33:40.988 } 00:33:40.988 EOF 00:33:40.988 )") 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:40.988 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:40.989 { 00:33:40.989 "params": { 00:33:40.989 "name": "Nvme$subsystem", 00:33:40.989 "trtype": "$TEST_TRANSPORT", 00:33:40.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.989 "adrfam": "ipv4", 00:33:40.989 "trsvcid": "$NVMF_PORT", 00:33:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.989 "hdgst": ${hdgst:-false}, 00:33:40.989 "ddgst": ${ddgst:-false} 00:33:40.989 }, 00:33:40.989 "method": "bdev_nvme_attach_controller" 00:33:40.989 } 00:33:40.989 EOF 00:33:40.989 )") 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:40.989 
09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:40.989 { 00:33:40.989 "params": { 00:33:40.989 "name": "Nvme$subsystem", 00:33:40.989 "trtype": "$TEST_TRANSPORT", 00:33:40.989 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:40.989 "adrfam": "ipv4", 00:33:40.989 "trsvcid": "$NVMF_PORT", 00:33:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:40.989 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:40.989 "hdgst": ${hdgst:-false}, 00:33:40.989 "ddgst": ${ddgst:-false} 00:33:40.989 }, 00:33:40.989 "method": "bdev_nvme_attach_controller" 00:33:40.989 } 00:33:40.989 EOF 00:33:40.989 )") 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:40.989 "params": { 00:33:40.989 "name": "Nvme0", 00:33:40.989 "trtype": "tcp", 00:33:40.989 "traddr": "10.0.0.2", 00:33:40.989 "adrfam": "ipv4", 00:33:40.989 "trsvcid": "4420", 00:33:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:40.989 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:40.989 "hdgst": false, 00:33:40.989 "ddgst": false 00:33:40.989 }, 00:33:40.989 "method": "bdev_nvme_attach_controller" 00:33:40.989 },{ 00:33:40.989 "params": { 00:33:40.989 "name": "Nvme1", 00:33:40.989 "trtype": "tcp", 00:33:40.989 "traddr": "10.0.0.2", 00:33:40.989 "adrfam": "ipv4", 00:33:40.989 "trsvcid": "4420", 00:33:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:40.989 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:40.989 "hdgst": false, 00:33:40.989 "ddgst": false 00:33:40.989 }, 00:33:40.989 "method": "bdev_nvme_attach_controller" 00:33:40.989 },{ 00:33:40.989 "params": { 00:33:40.989 "name": "Nvme2", 00:33:40.989 "trtype": "tcp", 00:33:40.989 "traddr": "10.0.0.2", 00:33:40.989 "adrfam": "ipv4", 00:33:40.989 "trsvcid": "4420", 00:33:40.989 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:33:40.989 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:33:40.989 "hdgst": false, 00:33:40.989 "ddgst": false 00:33:40.989 }, 00:33:40.989 "method": "bdev_nvme_attach_controller" 00:33:40.989 }' 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:40.989 09:17:56 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:40.989 09:17:56 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:40.989 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.989 ... 00:33:40.989 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.989 ... 00:33:40.989 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:33:40.989 ... 
00:33:40.989 fio-3.35 00:33:40.989 Starting 24 threads 00:33:53.184 00:33:53.184 filename0: (groupid=0, jobs=1): err= 0: pid=2599519: Wed Nov 20 09:18:07 2024 00:33:53.184 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.184 slat (nsec): min=7402, max=94720, avg=41659.60, stdev=22840.89 00:33:53.184 clat (usec): min=10496, max=31143, avg=27585.52, stdev=1630.27 00:33:53.184 lat (usec): min=10514, max=31179, avg=27627.18, stdev=1631.14 00:33:53.184 clat percentiles (usec): 00:33:53.184 | 1.00th=[15533], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:53.184 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:53.184 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.184 | 99.00th=[28705], 99.50th=[28967], 99.90th=[30802], 99.95th=[31065], 00:33:53.184 | 99.99th=[31065] 00:33:53.184 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:33:53.185 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:53.185 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.185 cpu : usr=98.69%, sys=0.95%, ctx=13, majf=0, minf=35 00:33:53.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599520: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.185 slat (nsec): min=6931, max=95536, avg=31022.02, stdev=22045.25 00:33:53.185 clat (usec): min=10406, max=31438, avg=27715.15, stdev=1642.52 00:33:53.185 lat (usec): min=10422, max=31458, avg=27746.17, stdev=1640.95 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 
1.00th=[15664], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:53.185 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.185 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:53.185 | 99.00th=[28967], 99.50th=[29230], 99.90th=[31327], 99.95th=[31327], 00:33:53.185 | 99.99th=[31327] 00:33:53.185 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:33:53.185 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:53.185 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.185 cpu : usr=98.39%, sys=1.21%, ctx=14, majf=0, minf=43 00:33:53.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599521: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:33:53.185 slat (usec): min=5, max=113, avg=45.59, stdev=25.45 00:33:53.185 clat (usec): min=15699, max=43081, avg=27636.72, stdev=1159.60 00:33:53.185 lat (usec): min=15722, max=43100, avg=27682.30, stdev=1159.84 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:53.185 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.185 | 99.00th=[29230], 99.50th=[30540], 99.90th=[43254], 99.95th=[43254], 00:33:53.185 | 99.99th=[43254] 00:33:53.185 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.53, stdev=57.55, samples=19 00:33:53.185 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:53.185 
lat (msec) : 20=0.35%, 50=99.65% 00:33:53.185 cpu : usr=98.75%, sys=0.87%, ctx=14, majf=0, minf=26 00:33:53.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599522: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.185 slat (nsec): min=7039, max=99626, avg=40181.17, stdev=20290.90 00:33:53.185 clat (usec): min=9618, max=46453, avg=27556.62, stdev=1840.00 00:33:53.185 lat (usec): min=9634, max=46467, avg=27596.80, stdev=1842.68 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[15139], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:53.185 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.185 | 99.00th=[28967], 99.50th=[30540], 99.90th=[41681], 99.95th=[41681], 00:33:53.185 | 99.99th=[46400] 00:33:53.185 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:33:53.185 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:53.185 lat (msec) : 10=0.12%, 20=1.20%, 50=98.67% 00:33:53.185 cpu : usr=98.71%, sys=0.94%, ctx=19, majf=0, minf=24 00:33:53.185 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.5%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 
00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599523: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:33:53.185 slat (usec): min=6, max=113, avg=48.52, stdev=24.44 00:33:53.185 clat (usec): min=15698, max=47255, avg=27649.60, stdev=1189.55 00:33:53.185 lat (usec): min=15754, max=47269, avg=27698.11, stdev=1188.22 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:53.185 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.185 | 99.00th=[29230], 99.50th=[30540], 99.90th=[43254], 99.95th=[43254], 00:33:53.185 | 99.99th=[47449] 00:33:53.185 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.53, stdev=57.55, samples=19 00:33:53.185 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:53.185 lat (msec) : 20=0.37%, 50=99.63% 00:33:53.185 cpu : usr=98.78%, sys=0.84%, ctx=15, majf=0, minf=38 00:33:53.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599524: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=568, BW=2276KiB/s (2331kB/s)(22.2MiB/10011msec) 00:33:53.185 slat (usec): min=4, max=107, avg=40.03, stdev=25.05 00:33:53.185 clat (usec): min=15778, max=48792, avg=27848.31, stdev=1378.28 00:33:53.185 lat (usec): min=15799, max=48805, avg=27888.34, stdev=1372.92 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:53.185 | 30.00th=[27657], 
40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.185 | 99.00th=[29230], 99.50th=[30802], 99.90th=[49021], 99.95th=[49021], 00:33:53.185 | 99.99th=[49021] 00:33:53.185 bw ( KiB/s): min= 2048, max= 2304, per=4.15%, avg=2270.32, stdev=71.93, samples=19 00:33:53.185 iops : min= 512, max= 576, avg=567.58, stdev=17.98, samples=19 00:33:53.185 lat (msec) : 20=0.32%, 50=99.68% 00:33:53.185 cpu : usr=98.73%, sys=0.90%, ctx=13, majf=0, minf=25 00:33:53.185 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599525: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=571, BW=2286KiB/s (2341kB/s)(22.4MiB/10016msec) 00:33:53.185 slat (nsec): min=6028, max=50791, avg=13475.08, stdev=3898.15 00:33:53.185 clat (usec): min=9698, max=40605, avg=27861.08, stdev=1462.45 00:33:53.185 lat (usec): min=9706, max=40612, avg=27874.55, stdev=1462.73 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[20317], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:53.185 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28443], 00:33:53.185 | 99.00th=[29754], 99.50th=[30802], 99.90th=[39584], 99.95th=[40633], 00:33:53.185 | 99.99th=[40633] 00:33:53.185 bw ( KiB/s): min= 2176, max= 2352, per=4.17%, avg=2282.11, stdev=58.23, samples=19 00:33:53.185 iops : min= 544, max= 588, avg=570.53, stdev=14.56, samples=19 00:33:53.185 lat (msec) : 10=0.23%, 20=0.56%, 50=99.21% 00:33:53.185 cpu : usr=98.59%, sys=1.03%, ctx=11, 
majf=0, minf=46 00:33:53.185 IO depths : 1=6.0%, 2=12.2%, 4=24.7%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:53.185 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.185 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.185 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.185 filename0: (groupid=0, jobs=1): err= 0: pid=2599526: Wed Nov 20 09:18:07 2024 00:33:53.185 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10007msec) 00:33:53.185 slat (usec): min=5, max=113, avg=45.98, stdev=25.38 00:33:53.185 clat (usec): min=15738, max=43950, avg=27637.58, stdev=1195.28 00:33:53.185 lat (usec): min=15751, max=43965, avg=27683.56, stdev=1195.28 00:33:53.185 clat percentiles (usec): 00:33:53.185 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:53.185 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.185 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.186 | 99.00th=[29230], 99.50th=[30540], 99.90th=[43779], 99.95th=[43779], 00:33:53.186 | 99.99th=[43779] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:53.186 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:53.186 lat (msec) : 20=0.37%, 50=99.63% 00:33:53.186 cpu : usr=98.73%, sys=0.89%, ctx=13, majf=0, minf=29 00:33:53.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599527: Wed Nov 20 09:18:07 2024 00:33:53.186 read: 
IOPS=571, BW=2287KiB/s (2342kB/s)(22.4MiB/10011msec) 00:33:53.186 slat (nsec): min=6796, max=39522, avg=13176.52, stdev=3519.34 00:33:53.186 clat (usec): min=11548, max=46850, avg=27867.30, stdev=1514.53 00:33:53.186 lat (usec): min=11556, max=46865, avg=27880.47, stdev=1514.89 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[20055], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:53.186 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.186 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28181], 00:33:53.186 | 99.00th=[29230], 99.50th=[31065], 99.90th=[46924], 99.95th=[46924], 00:33:53.186 | 99.99th=[46924] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2400, per=4.17%, avg=2282.11, stdev=60.39, samples=19 00:33:53.186 iops : min= 544, max= 600, avg=570.53, stdev=15.10, samples=19 00:33:53.186 lat (msec) : 20=0.91%, 50=99.09% 00:33:53.186 cpu : usr=98.87%, sys=0.76%, ctx=15, majf=0, minf=54 00:33:53.186 IO depths : 1=6.0%, 2=12.1%, 4=24.5%, 8=50.9%, 16=6.5%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5724,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599528: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10004msec) 00:33:53.186 slat (usec): min=6, max=105, avg=41.65, stdev=23.61 00:33:53.186 clat (usec): min=11962, max=59140, avg=27680.16, stdev=1988.13 00:33:53.186 lat (usec): min=11970, max=59187, avg=27721.81, stdev=1990.21 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:53.186 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:53.186 | 70.00th=[27919], 80.00th=[27919], 
90.00th=[27919], 95.00th=[28181], 00:33:53.186 | 99.00th=[28967], 99.50th=[30802], 99.90th=[58983], 99.95th=[58983], 00:33:53.186 | 99.99th=[58983] 00:33:53.186 bw ( KiB/s): min= 2164, max= 2304, per=4.14%, avg=2269.68, stdev=59.05, samples=19 00:33:53.186 iops : min= 541, max= 576, avg=567.42, stdev=14.76, samples=19 00:33:53.186 lat (msec) : 20=0.77%, 50=98.95%, 100=0.28% 00:33:53.186 cpu : usr=98.67%, sys=0.97%, ctx=9, majf=0, minf=28 00:33:53.186 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599529: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=569, BW=2277KiB/s (2331kB/s)(22.2MiB/10004msec) 00:33:53.186 slat (nsec): min=6687, max=96774, avg=42096.17, stdev=23319.07 00:33:53.186 clat (usec): min=13906, max=59358, avg=27683.56, stdev=2013.67 00:33:53.186 lat (usec): min=13953, max=59384, avg=27725.66, stdev=2013.90 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[26870], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:53.186 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:53.186 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.186 | 99.00th=[28967], 99.50th=[30802], 99.90th=[59507], 99.95th=[59507], 00:33:53.186 | 99.99th=[59507] 00:33:53.186 bw ( KiB/s): min= 2160, max= 2304, per=4.14%, avg=2269.47, stdev=59.45, samples=19 00:33:53.186 iops : min= 540, max= 576, avg=567.37, stdev=14.86, samples=19 00:33:53.186 lat (msec) : 20=0.81%, 50=98.91%, 100=0.28% 00:33:53.186 cpu : usr=98.61%, sys=1.00%, ctx=11, majf=0, minf=25 00:33:53.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.1%, 16=6.3%, 
32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5694,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599530: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10016msec) 00:33:53.186 slat (usec): min=6, max=103, avg=16.63, stdev=12.96 00:33:53.186 clat (usec): min=16027, max=31167, avg=27926.68, stdev=829.14 00:33:53.186 lat (usec): min=16036, max=31191, avg=27943.31, stdev=828.36 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[27132], 5.00th=[27657], 10.00th=[27657], 20.00th=[27919], 00:33:53.186 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.186 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:53.186 | 99.00th=[29230], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:33:53.186 | 99.99th=[31065] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:53.186 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:53.186 lat (msec) : 20=0.28%, 50=99.72% 00:33:53.186 cpu : usr=98.52%, sys=1.10%, ctx=15, majf=0, minf=42 00:33:53.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599531: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:53.186 slat (usec): min=7, 
max=113, avg=46.32, stdev=24.69 00:33:53.186 clat (usec): min=15802, max=41888, avg=27752.94, stdev=1109.99 00:33:53.186 lat (usec): min=15834, max=41906, avg=27799.26, stdev=1105.00 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:53.186 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:53.186 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.186 | 99.00th=[29230], 99.50th=[30802], 99.90th=[41681], 99.95th=[41681], 00:33:53.186 | 99.99th=[41681] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:53.186 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:53.186 lat (msec) : 20=0.30%, 50=99.70% 00:33:53.186 cpu : usr=98.76%, sys=0.86%, ctx=16, majf=0, minf=31 00:33:53.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599532: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10006msec) 00:33:53.186 slat (usec): min=6, max=113, avg=47.05, stdev=25.11 00:33:53.186 clat (usec): min=15674, max=47266, avg=27632.75, stdev=1208.41 00:33:53.186 lat (usec): min=15688, max=47281, avg=27679.80, stdev=1208.43 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:53.186 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.186 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.186 | 99.00th=[29492], 99.50th=[30802], 99.90th=[43254], 
99.95th=[43254], 00:33:53.186 | 99.99th=[47449] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.53, stdev=57.55, samples=19 00:33:53.186 iops : min= 544, max= 576, avg=567.63, stdev=14.39, samples=19 00:33:53.186 lat (msec) : 20=0.32%, 50=99.68% 00:33:53.186 cpu : usr=98.66%, sys=0.97%, ctx=8, majf=0, minf=25 00:33:53.186 IO depths : 1=6.2%, 2=12.4%, 4=24.9%, 8=50.2%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.186 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.186 filename1: (groupid=0, jobs=1): err= 0: pid=2599533: Wed Nov 20 09:18:07 2024 00:33:53.186 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10004msec) 00:33:53.186 slat (nsec): min=9350, max=81753, avg=39071.32, stdev=15609.64 00:33:53.186 clat (usec): min=15420, max=41917, avg=27758.85, stdev=1110.55 00:33:53.186 lat (usec): min=15464, max=41934, avg=27797.92, stdev=1109.17 00:33:53.186 clat percentiles (usec): 00:33:53.186 | 1.00th=[27132], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:53.186 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:53.186 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.186 | 99.00th=[29230], 99.50th=[30802], 99.90th=[41681], 99.95th=[41681], 00:33:53.186 | 99.99th=[41681] 00:33:53.186 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:53.186 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:53.186 lat (msec) : 20=0.28%, 50=99.72% 00:33:53.186 cpu : usr=98.64%, sys=1.02%, ctx=22, majf=0, minf=44 00:33:53.186 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.186 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 
complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.186 issued rwts: total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename1: (groupid=0, jobs=1): err= 0: pid=2599534: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.187 slat (nsec): min=7392, max=80232, avg=34711.89, stdev=14778.47 00:33:53.187 clat (usec): min=10522, max=31227, avg=27642.06, stdev=1621.21 00:33:53.187 lat (usec): min=10564, max=31253, avg=27676.77, stdev=1621.75 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[15533], 5.00th=[27395], 10.00th=[27395], 20.00th=[27657], 00:33:53.187 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27657], 60.00th=[27919], 00:33:53.187 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.187 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31065], 99.95th=[31065], 00:33:53.187 | 99.99th=[31327] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:33:53.187 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:53.187 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.187 cpu : usr=98.00%, sys=1.37%, ctx=189, majf=0, minf=34 00:33:53.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599535: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=573, BW=2293KiB/s (2348kB/s)(22.4MiB/10004msec) 00:33:53.187 slat (nsec): min=6908, max=95738, avg=43233.04, stdev=22727.54 00:33:53.187 clat (usec): min=10216, max=46375, avg=27486.95, 
stdev=2077.97 00:33:53.187 lat (usec): min=10223, max=46394, avg=27530.18, stdev=2081.13 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[15139], 5.00th=[27132], 10.00th=[27395], 20.00th=[27395], 00:33:53.187 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.187 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.187 | 99.00th=[28967], 99.50th=[31327], 99.90th=[46400], 99.95th=[46400], 00:33:53.187 | 99.99th=[46400] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2480, per=4.18%, avg=2287.20, stdev=69.16, samples=20 00:33:53.187 iops : min= 544, max= 620, avg=571.80, stdev=17.29, samples=20 00:33:53.187 lat (msec) : 20=1.64%, 50=98.36% 00:33:53.187 cpu : usr=98.69%, sys=0.94%, ctx=10, majf=0, minf=28 00:33:53.187 IO depths : 1=5.9%, 2=12.1%, 4=24.7%, 8=50.7%, 16=6.6%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5734,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599536: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=571, BW=2287KiB/s (2341kB/s)(22.4MiB/10017msec) 00:33:53.187 slat (usec): min=6, max=106, avg=32.27, stdev=23.96 00:33:53.187 clat (usec): min=9683, max=34025, avg=27744.11, stdev=1354.60 00:33:53.187 lat (usec): min=9692, max=34056, avg=27776.38, stdev=1352.96 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[20579], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:33:53.187 | 30.00th=[27657], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.187 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:53.187 | 99.00th=[30802], 99.50th=[31065], 99.90th=[32900], 99.95th=[33162], 00:33:53.187 | 99.99th=[33817] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2480, 
per=4.18%, avg=2286.32, stdev=70.93, samples=19 00:33:53.187 iops : min= 544, max= 620, avg=571.58, stdev=17.73, samples=19 00:33:53.187 lat (msec) : 10=0.12%, 20=0.40%, 50=99.48% 00:33:53.187 cpu : usr=98.63%, sys=0.99%, ctx=14, majf=0, minf=28 00:33:53.187 IO depths : 1=5.6%, 2=11.7%, 4=24.5%, 8=51.3%, 16=6.9%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5726,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599537: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=569, BW=2277KiB/s (2332kB/s)(22.2MiB/10005msec) 00:33:53.187 slat (usec): min=9, max=112, avg=48.77, stdev=24.42 00:33:53.187 clat (usec): min=15656, max=45948, avg=27649.14, stdev=1145.81 00:33:53.187 lat (usec): min=15679, max=45964, avg=27697.91, stdev=1144.42 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27132], 20.00th=[27395], 00:33:53.187 | 30.00th=[27395], 40.00th=[27395], 50.00th=[27657], 60.00th=[27657], 00:33:53.187 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.187 | 99.00th=[29230], 99.50th=[30540], 99.90th=[41681], 99.95th=[41681], 00:33:53.187 | 99.99th=[45876] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2304, per=4.15%, avg=2270.32, stdev=57.91, samples=19 00:33:53.187 iops : min= 544, max= 576, avg=567.58, stdev=14.48, samples=19 00:33:53.187 lat (msec) : 20=0.37%, 50=99.63% 00:33:53.187 cpu : usr=98.51%, sys=1.10%, ctx=33, majf=0, minf=47 00:33:53.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: 
total=5696,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599538: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.187 slat (usec): min=6, max=103, avg=38.77, stdev=21.77 00:33:53.187 clat (usec): min=9617, max=69866, avg=27589.13, stdev=2959.22 00:33:53.187 lat (usec): min=9626, max=69881, avg=27627.90, stdev=2959.83 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[16712], 5.00th=[22938], 10.00th=[27395], 20.00th=[27395], 00:33:53.187 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:53.187 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.187 | 99.00th=[38011], 99.50th=[40633], 99.90th=[59507], 99.95th=[69731], 00:33:53.187 | 99.99th=[69731] 00:33:53.187 bw ( KiB/s): min= 2096, max= 2480, per=4.17%, avg=2283.79, stdev=81.03, samples=19 00:33:53.187 iops : min= 524, max= 620, avg=570.95, stdev=20.26, samples=19 00:33:53.187 lat (msec) : 10=0.21%, 20=1.29%, 50=98.22%, 100=0.28% 00:33:53.187 cpu : usr=98.53%, sys=1.01%, ctx=83, majf=0, minf=30 00:33:53.187 IO depths : 1=5.3%, 2=11.0%, 4=23.2%, 8=53.2%, 16=7.3%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=93.6%, 8=0.7%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599539: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=570, BW=2281KiB/s (2336kB/s)(22.3MiB/10016msec) 00:33:53.187 slat (usec): min=5, max=106, avg=39.12, stdev=24.17 00:33:53.187 clat (usec): min=15705, max=31205, avg=27770.25, stdev=847.38 00:33:53.187 lat (usec): min=15736, max=31218, avg=27809.37, stdev=842.20 00:33:53.187 clat 
percentiles (usec): 00:33:53.187 | 1.00th=[27132], 5.00th=[27132], 10.00th=[27395], 20.00th=[27657], 00:33:53.187 | 30.00th=[27657], 40.00th=[27657], 50.00th=[27919], 60.00th=[27919], 00:33:53.187 | 70.00th=[27919], 80.00th=[27919], 90.00th=[28181], 95.00th=[28181], 00:33:53.187 | 99.00th=[28967], 99.50th=[30016], 99.90th=[31065], 99.95th=[31065], 00:33:53.187 | 99.99th=[31327] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2304, per=4.16%, avg=2277.05, stdev=53.61, samples=19 00:33:53.187 iops : min= 544, max= 576, avg=569.26, stdev=13.40, samples=19 00:33:53.187 lat (msec) : 20=0.30%, 50=99.70% 00:33:53.187 cpu : usr=98.64%, sys=0.97%, ctx=12, majf=0, minf=35 00:33:53.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599540: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.187 slat (nsec): min=6832, max=81748, avg=11180.31, stdev=5080.72 00:33:53.187 clat (usec): min=10395, max=31336, avg=27844.30, stdev=1637.70 00:33:53.187 lat (usec): min=10417, max=31347, avg=27855.49, stdev=1636.24 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[15664], 5.00th=[27657], 10.00th=[27919], 20.00th=[27919], 00:33:53.187 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.187 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:53.187 | 99.00th=[28967], 99.50th=[29230], 99.90th=[31327], 99.95th=[31327], 00:33:53.187 | 99.99th=[31327] 00:33:53.187 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:33:53.187 iops : min= 544, max= 608, 
avg=572.63, stdev=14.68, samples=19 00:33:53.187 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.187 cpu : usr=98.56%, sys=1.07%, ctx=13, majf=0, minf=55 00:33:53.187 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.187 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.187 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.187 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.187 filename2: (groupid=0, jobs=1): err= 0: pid=2599541: Wed Nov 20 09:18:07 2024 00:33:53.187 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.187 slat (nsec): min=6742, max=96866, avg=21757.46, stdev=18790.26 00:33:53.187 clat (usec): min=10438, max=31404, avg=27777.19, stdev=1637.13 00:33:53.187 lat (usec): min=10479, max=31430, avg=27798.95, stdev=1635.28 00:33:53.187 clat percentiles (usec): 00:33:53.187 | 1.00th=[15664], 5.00th=[27395], 10.00th=[27657], 20.00th=[27919], 00:33:53.187 | 30.00th=[27919], 40.00th=[27919], 50.00th=[27919], 60.00th=[27919], 00:33:53.187 | 70.00th=[27919], 80.00th=[28181], 90.00th=[28181], 95.00th=[28443], 00:33:53.187 | 99.00th=[28967], 99.50th=[28967], 99.90th=[31327], 99.95th=[31327], 00:33:53.188 | 99.99th=[31327] 00:33:53.188 bw ( KiB/s): min= 2176, max= 2432, per=4.18%, avg=2290.53, stdev=58.73, samples=19 00:33:53.188 iops : min= 544, max= 608, avg=572.63, stdev=14.68, samples=19 00:33:53.188 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.188 cpu : usr=98.74%, sys=0.89%, ctx=13, majf=0, minf=38 00:33:53.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:33:53.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.188 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.188 latency : target=0, window=0, 
percentile=100.00%, depth=16 00:33:53.188 filename2: (groupid=0, jobs=1): err= 0: pid=2599542: Wed Nov 20 09:18:07 2024 00:33:53.188 read: IOPS=572, BW=2290KiB/s (2345kB/s)(22.4MiB/10004msec) 00:33:53.188 slat (usec): min=7, max=191, avg=42.87, stdev=22.84 00:33:53.188 clat (usec): min=10285, max=31229, avg=27518.78, stdev=1611.80 00:33:53.188 lat (usec): min=10299, max=31270, avg=27561.64, stdev=1614.69 00:33:53.188 clat percentiles (usec): 00:33:53.188 | 1.00th=[15533], 5.00th=[27395], 10.00th=[27395], 20.00th=[27395], 00:33:53.188 | 30.00th=[27395], 40.00th=[27657], 50.00th=[27657], 60.00th=[27657], 00:33:53.188 | 70.00th=[27919], 80.00th=[27919], 90.00th=[27919], 95.00th=[28181], 00:33:53.188 | 99.00th=[28705], 99.50th=[29230], 99.90th=[31065], 99.95th=[31065], 00:33:53.188 | 99.99th=[31327] 00:33:53.188 bw ( KiB/s): min= 2176, max= 2432, per=4.17%, avg=2284.80, stdev=62.64, samples=20 00:33:53.188 iops : min= 544, max= 608, avg=571.20, stdev=15.66, samples=20 00:33:53.188 lat (msec) : 20=1.12%, 50=98.88% 00:33:53.188 cpu : usr=98.70%, sys=0.94%, ctx=12, majf=0, minf=36 00:33:53.188 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:33:53.188 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.188 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:53.188 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:53.188 latency : target=0, window=0, percentile=100.00%, depth=16 00:33:53.188 00:33:53.188 Run status group 0 (all jobs): 00:33:53.188 READ: bw=53.5MiB/s (56.1MB/s), 2276KiB/s-2293KiB/s (2331kB/s-2348kB/s), io=536MiB (562MB), run=10004-10017msec 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.188 09:18:07 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:33:53.188 
09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:07 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 bdev_null0 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:33:53.188 [2024-11-20 09:18:08.025840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 bdev_null1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.188 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:53.188 { 00:33:53.188 "params": { 00:33:53.188 "name": "Nvme$subsystem", 00:33:53.188 "trtype": "$TEST_TRANSPORT", 00:33:53.188 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.188 "adrfam": "ipv4", 00:33:53.188 "trsvcid": "$NVMF_PORT", 00:33:53.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.189 "hdgst": ${hdgst:-false}, 00:33:53.189 "ddgst": ${ddgst:-false} 00:33:53.189 }, 00:33:53.189 "method": "bdev_nvme_attach_controller" 00:33:53.189 } 00:33:53.189 EOF 00:33:53.189 )") 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- 
# for subsystem in "${@:-1}" 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:53.189 { 00:33:53.189 "params": { 00:33:53.189 "name": "Nvme$subsystem", 00:33:53.189 "trtype": "$TEST_TRANSPORT", 00:33:53.189 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:53.189 "adrfam": "ipv4", 00:33:53.189 "trsvcid": "$NVMF_PORT", 00:33:53.189 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:53.189 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:53.189 "hdgst": ${hdgst:-false}, 00:33:53.189 "ddgst": ${ddgst:-false} 00:33:53.189 }, 00:33:53.189 "method": "bdev_nvme_attach_controller" 00:33:53.189 } 00:33:53.189 EOF 00:33:53.189 )") 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:53.189 "params": { 00:33:53.189 "name": "Nvme0", 00:33:53.189 "trtype": "tcp", 00:33:53.189 "traddr": "10.0.0.2", 00:33:53.189 "adrfam": "ipv4", 00:33:53.189 "trsvcid": "4420", 00:33:53.189 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:53.189 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:53.189 "hdgst": false, 00:33:53.189 "ddgst": false 00:33:53.189 }, 00:33:53.189 "method": "bdev_nvme_attach_controller" 00:33:53.189 },{ 00:33:53.189 "params": { 00:33:53.189 "name": "Nvme1", 00:33:53.189 "trtype": "tcp", 00:33:53.189 "traddr": "10.0.0.2", 00:33:53.189 "adrfam": "ipv4", 00:33:53.189 "trsvcid": "4420", 00:33:53.189 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:33:53.189 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:33:53.189 "hdgst": false, 00:33:53.189 "ddgst": false 00:33:53.189 }, 00:33:53.189 "method": "bdev_nvme_attach_controller" 00:33:53.189 }' 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:53.189 09:18:08 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:53.189 09:18:08 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:53.189 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.189 ... 00:33:53.189 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:33:53.189 ... 00:33:53.189 fio-3.35 00:33:53.189 Starting 4 threads 00:33:58.456 00:33:58.456 filename0: (groupid=0, jobs=1): err= 0: pid=2601777: Wed Nov 20 09:18:14 2024 00:33:58.456 read: IOPS=2660, BW=20.8MiB/s (21.8MB/s)(105MiB/5043msec) 00:33:58.456 slat (nsec): min=6101, max=37624, avg=9711.58, stdev=4045.21 00:33:58.456 clat (usec): min=1056, max=45008, avg=2965.87, stdev=862.39 00:33:58.456 lat (usec): min=1068, max=45026, avg=2975.59, stdev=862.47 00:33:58.456 clat percentiles (usec): 00:33:58.456 | 1.00th=[ 1909], 5.00th=[ 2212], 10.00th=[ 2376], 20.00th=[ 2573], 00:33:58.456 | 30.00th=[ 2737], 40.00th=[ 2868], 50.00th=[ 2966], 60.00th=[ 3064], 00:33:58.456 | 70.00th=[ 3097], 80.00th=[ 3195], 90.00th=[ 3490], 95.00th=[ 3752], 00:33:58.456 | 99.00th=[ 4555], 99.50th=[ 4948], 99.90th=[ 5538], 99.95th=[ 5604], 00:33:58.456 | 99.99th=[44827] 00:33:58.456 bw ( KiB/s): min=19872, max=23888, per=25.60%, avg=21464.80, stdev=1203.64, samples=10 00:33:58.456 iops : min= 2484, max= 2986, avg=2683.10, stdev=150.45, samples=10 00:33:58.456 lat (msec) : 2=1.36%, 4=95.51%, 10=3.10%, 50=0.03% 00:33:58.456 cpu : usr=96.41%, sys=3.23%, ctx=8, majf=0, minf=0 00:33:58.457 IO depths : 1=0.3%, 2=4.5%, 4=67.0%, 8=28.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 complete : 0=0.0%, 4=92.8%, 8=7.2%, 
16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 issued rwts: total=13417,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.457 filename0: (groupid=0, jobs=1): err= 0: pid=2601778: Wed Nov 20 09:18:14 2024 00:33:58.457 read: IOPS=2570, BW=20.1MiB/s (21.1MB/s)(100MiB/5001msec) 00:33:58.457 slat (nsec): min=6105, max=39437, avg=10197.34, stdev=4373.94 00:33:58.457 clat (usec): min=588, max=5701, avg=3081.01, stdev=489.06 00:33:58.457 lat (usec): min=598, max=5714, avg=3091.20, stdev=488.71 00:33:58.457 clat percentiles (usec): 00:33:58.457 | 1.00th=[ 2057], 5.00th=[ 2409], 10.00th=[ 2573], 20.00th=[ 2769], 00:33:58.457 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:33:58.457 | 70.00th=[ 3195], 80.00th=[ 3359], 90.00th=[ 3654], 95.00th=[ 4015], 00:33:58.457 | 99.00th=[ 4752], 99.50th=[ 5080], 99.90th=[ 5604], 99.95th=[ 5669], 00:33:58.457 | 99.99th=[ 5669] 00:33:58.457 bw ( KiB/s): min=19552, max=21296, per=24.39%, avg=20448.78, stdev=659.67, samples=9 00:33:58.457 iops : min= 2444, max= 2662, avg=2556.00, stdev=82.58, samples=9 00:33:58.457 lat (usec) : 750=0.01%, 1000=0.01% 00:33:58.457 lat (msec) : 2=0.75%, 4=94.13%, 10=5.10% 00:33:58.457 cpu : usr=96.44%, sys=3.22%, ctx=11, majf=0, minf=0 00:33:58.457 IO depths : 1=0.5%, 2=5.4%, 4=66.8%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 complete : 0=0.0%, 4=92.2%, 8=7.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 issued rwts: total=12856,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.457 filename1: (groupid=0, jobs=1): err= 0: pid=2601779: Wed Nov 20 09:18:14 2024 00:33:58.457 read: IOPS=2839, BW=22.2MiB/s (23.3MB/s)(111MiB/5002msec) 00:33:58.457 slat (nsec): min=6115, max=49200, avg=11046.08, stdev=5172.80 00:33:58.457 clat (usec): min=703, max=5121, avg=2778.99, 
stdev=406.13 00:33:58.457 lat (usec): min=714, max=5127, avg=2790.04, stdev=406.66 00:33:58.457 clat percentiles (usec): 00:33:58.457 | 1.00th=[ 1827], 5.00th=[ 2147], 10.00th=[ 2278], 20.00th=[ 2442], 00:33:58.457 | 30.00th=[ 2573], 40.00th=[ 2704], 50.00th=[ 2802], 60.00th=[ 2900], 00:33:58.457 | 70.00th=[ 2999], 80.00th=[ 3097], 90.00th=[ 3228], 95.00th=[ 3392], 00:33:58.457 | 99.00th=[ 3884], 99.50th=[ 4228], 99.90th=[ 4817], 99.95th=[ 4948], 00:33:58.457 | 99.99th=[ 5145] 00:33:58.457 bw ( KiB/s): min=21120, max=24736, per=27.24%, avg=22840.89, stdev=1046.90, samples=9 00:33:58.457 iops : min= 2640, max= 3092, avg=2855.11, stdev=130.86, samples=9 00:33:58.457 lat (usec) : 750=0.01%, 1000=0.01% 00:33:58.457 lat (msec) : 2=2.24%, 4=97.02%, 10=0.73% 00:33:58.457 cpu : usr=93.10%, sys=4.76%, ctx=226, majf=0, minf=0 00:33:58.457 IO depths : 1=0.5%, 2=15.0%, 4=57.4%, 8=27.2%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 issued rwts: total=14205,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.457 filename1: (groupid=0, jobs=1): err= 0: pid=2601780: Wed Nov 20 09:18:14 2024 00:33:58.457 read: IOPS=2474, BW=19.3MiB/s (20.3MB/s)(96.7MiB/5001msec) 00:33:58.457 slat (nsec): min=6128, max=38834, avg=10516.73, stdev=4987.59 00:33:58.457 clat (usec): min=667, max=5814, avg=3200.93, stdev=497.92 00:33:58.457 lat (usec): min=678, max=5821, avg=3211.45, stdev=497.32 00:33:58.457 clat percentiles (usec): 00:33:58.457 | 1.00th=[ 2278], 5.00th=[ 2638], 10.00th=[ 2769], 20.00th=[ 2868], 00:33:58.457 | 30.00th=[ 2966], 40.00th=[ 3032], 50.00th=[ 3097], 60.00th=[ 3163], 00:33:58.457 | 70.00th=[ 3261], 80.00th=[ 3458], 90.00th=[ 3818], 95.00th=[ 4293], 00:33:58.457 | 99.00th=[ 5014], 99.50th=[ 5276], 99.90th=[ 5604], 99.95th=[ 5604], 00:33:58.457 | 
99.99th=[ 5800] 00:33:58.457 bw ( KiB/s): min=19184, max=20176, per=23.44%, avg=19653.33, stdev=349.63, samples=9 00:33:58.457 iops : min= 2398, max= 2522, avg=2456.67, stdev=43.70, samples=9 00:33:58.457 lat (usec) : 750=0.01%, 1000=0.06% 00:33:58.457 lat (msec) : 2=0.16%, 4=92.28%, 10=7.48% 00:33:58.457 cpu : usr=96.42%, sys=3.24%, ctx=6, majf=0, minf=0 00:33:58.457 IO depths : 1=0.2%, 2=2.9%, 4=69.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:33:58.457 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 complete : 0=0.0%, 4=91.8%, 8=8.2%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:33:58.457 issued rwts: total=12376,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:33:58.457 latency : target=0, window=0, percentile=100.00%, depth=8 00:33:58.457 00:33:58.457 Run status group 0 (all jobs): 00:33:58.457 READ: bw=81.9MiB/s (85.9MB/s), 19.3MiB/s-22.2MiB/s (20.3MB/s-23.3MB/s), io=413MiB (433MB), run=5001-5043msec 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:33:58.716 09:18:14 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.716 09:18:14 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:33:58.717 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.717 00:33:58.717 real 0m24.561s 00:33:58.717 user 4m52.291s 00:33:58.717 sys 0m4.907s 00:33:58.717 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 ************************************ 00:33:58.717 END TEST fio_dif_rand_params 00:33:58.717 ************************************ 00:33:58.717 09:18:14 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:33:58.717 09:18:14 nvmf_dif -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:58.717 09:18:14 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 ************************************ 00:33:58.717 START TEST fio_dif_digest 00:33:58.717 ************************************ 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 
00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 bdev_null0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:33:58.717 [2024-11-20 09:18:14.719955] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@372 -- # config=() 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:58.717 { 00:33:58.717 "params": { 00:33:58.717 "name": "Nvme$subsystem", 00:33:58.717 "trtype": "$TEST_TRANSPORT", 00:33:58.717 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:58.717 "adrfam": "ipv4", 00:33:58.717 "trsvcid": "$NVMF_PORT", 00:33:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:58.717 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:58.717 "hdgst": ${hdgst:-false}, 00:33:58.717 "ddgst": ${ddgst:-false} 00:33:58.717 }, 00:33:58.717 "method": "bdev_nvme_attach_controller" 00:33:58.717 } 00:33:58.717 EOF 00:33:58.717 )") 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # shift 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 
00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:33:58.717 09:18:14 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:58.717 "params": { 00:33:58.717 "name": "Nvme0", 00:33:58.717 "trtype": "tcp", 00:33:58.717 "traddr": "10.0.0.2", 00:33:58.717 "adrfam": "ipv4", 00:33:58.717 "trsvcid": "4420", 00:33:58.717 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:58.717 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:58.717 "hdgst": true, 00:33:58.717 "ddgst": true 00:33:58.717 }, 00:33:58.717 "method": "bdev_nvme_attach_controller" 00:33:58.717 }' 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:59.003 09:18:14 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:59.265 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:33:59.265 ... 
00:33:59.265 fio-3.35 00:33:59.265 Starting 3 threads 00:34:11.458 00:34:11.458 filename0: (groupid=0, jobs=1): err= 0: pid=2603062: Wed Nov 20 09:18:25 2024 00:34:11.458 read: IOPS=302, BW=37.8MiB/s (39.6MB/s)(379MiB/10047msec) 00:34:11.458 slat (usec): min=6, max=107, avg=20.93, stdev= 5.82 00:34:11.458 clat (usec): min=7004, max=48944, avg=9893.60, stdev=1195.58 00:34:11.458 lat (usec): min=7017, max=48966, avg=9914.53, stdev=1195.42 00:34:11.458 clat percentiles (usec): 00:34:11.458 | 1.00th=[ 8094], 5.00th=[ 8848], 10.00th=[ 8979], 20.00th=[ 9372], 00:34:11.458 | 30.00th=[ 9503], 40.00th=[ 9765], 50.00th=[ 9896], 60.00th=[10028], 00:34:11.458 | 70.00th=[10290], 80.00th=[10421], 90.00th=[10683], 95.00th=[10945], 00:34:11.458 | 99.00th=[11469], 99.50th=[11731], 99.90th=[11994], 99.95th=[47449], 00:34:11.458 | 99.99th=[49021] 00:34:11.458 bw ( KiB/s): min=38144, max=39936, per=36.04%, avg=38822.40, stdev=486.26, samples=20 00:34:11.458 iops : min= 298, max= 312, avg=303.30, stdev= 3.80, samples=20 00:34:11.458 lat (msec) : 10=56.28%, 20=43.66%, 50=0.07% 00:34:11.458 cpu : usr=96.66%, sys=3.03%, ctx=19, majf=0, minf=125 00:34:11.458 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 issued rwts: total=3035,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.458 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.458 filename0: (groupid=0, jobs=1): err= 0: pid=2603063: Wed Nov 20 09:18:25 2024 00:34:11.458 read: IOPS=266, BW=33.3MiB/s (34.9MB/s)(335MiB/10045msec) 00:34:11.458 slat (nsec): min=6485, max=49127, avg=18043.45, stdev=7403.67 00:34:11.458 clat (usec): min=6631, max=51384, avg=11221.05, stdev=1284.69 00:34:11.458 lat (usec): min=6656, max=51412, avg=11239.09, stdev=1284.83 00:34:11.458 clat percentiles (usec): 00:34:11.458 | 1.00th=[ 9241], 
5.00th=[10028], 10.00th=[10290], 20.00th=[10683], 00:34:11.458 | 30.00th=[10814], 40.00th=[11076], 50.00th=[11207], 60.00th=[11338], 00:34:11.458 | 70.00th=[11600], 80.00th=[11731], 90.00th=[12125], 95.00th=[12387], 00:34:11.458 | 99.00th=[12911], 99.50th=[13173], 99.90th=[13960], 99.95th=[46924], 00:34:11.458 | 99.99th=[51643] 00:34:11.458 bw ( KiB/s): min=33280, max=36352, per=31.79%, avg=34240.00, stdev=663.81, samples=20 00:34:11.458 iops : min= 260, max= 284, avg=267.50, stdev= 5.19, samples=20 00:34:11.458 lat (msec) : 10=3.96%, 20=95.97%, 50=0.04%, 100=0.04% 00:34:11.458 cpu : usr=96.00%, sys=3.68%, ctx=31, majf=0, minf=56 00:34:11.458 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 issued rwts: total=2677,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.458 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.458 filename0: (groupid=0, jobs=1): err= 0: pid=2603064: Wed Nov 20 09:18:25 2024 00:34:11.458 read: IOPS=272, BW=34.1MiB/s (35.8MB/s)(343MiB/10045msec) 00:34:11.458 slat (nsec): min=6310, max=43763, avg=18252.78, stdev=7463.74 00:34:11.458 clat (usec): min=8447, max=52971, avg=10954.91, stdev=1846.71 00:34:11.458 lat (usec): min=8455, max=52985, avg=10973.16, stdev=1846.64 00:34:11.458 clat percentiles (usec): 00:34:11.458 | 1.00th=[ 9110], 5.00th=[ 9765], 10.00th=[10028], 20.00th=[10290], 00:34:11.458 | 30.00th=[10552], 40.00th=[10683], 50.00th=[10814], 60.00th=[11076], 00:34:11.458 | 70.00th=[11207], 80.00th=[11469], 90.00th=[11731], 95.00th=[12125], 00:34:11.458 | 99.00th=[12649], 99.50th=[12911], 99.90th=[53216], 99.95th=[53216], 00:34:11.458 | 99.99th=[53216] 00:34:11.458 bw ( KiB/s): min=32256, max=36352, per=32.56%, avg=35072.00, stdev=805.27, samples=20 00:34:11.458 iops : min= 252, max= 284, avg=274.00, stdev= 6.29, 
samples=20 00:34:11.458 lat (msec) : 10=10.18%, 20=89.64%, 50=0.07%, 100=0.11% 00:34:11.458 cpu : usr=96.17%, sys=3.51%, ctx=15, majf=0, minf=46 00:34:11.458 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:11.458 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:11.458 issued rwts: total=2742,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:11.458 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:11.458 00:34:11.458 Run status group 0 (all jobs): 00:34:11.458 READ: bw=105MiB/s (110MB/s), 33.3MiB/s-37.8MiB/s (34.9MB/s-39.6MB/s), io=1057MiB (1108MB), run=10045-10047msec 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:11.458 00:34:11.458 
real 0m11.237s 00:34:11.458 user 0m35.467s 00:34:11.458 sys 0m1.346s 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:11.458 09:18:25 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:11.458 ************************************ 00:34:11.458 END TEST fio_dif_digest 00:34:11.458 ************************************ 00:34:11.458 09:18:25 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:11.458 09:18:25 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:11.458 09:18:25 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:11.458 rmmod nvme_tcp 00:34:11.458 rmmod nvme_fabrics 00:34:11.458 rmmod nvme_keyring 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 2594166 ']' 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 2594166 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 2594166 ']' 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 2594166 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2594166 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:11.458 
09:18:26 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2594166' 00:34:11.458 killing process with pid 2594166 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@973 -- # kill 2594166 00:34:11.458 09:18:26 nvmf_dif -- common/autotest_common.sh@978 -- # wait 2594166 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:34:11.458 09:18:26 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:13.361 Waiting for block devices as requested 00:34:13.361 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:13.361 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.361 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.361 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:13.361 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:13.619 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:13.619 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:13.619 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:13.877 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:13.877 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:13.877 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:13.877 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:14.136 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:14.136 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:14.136 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:14.395 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:14.395 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:14.395 09:18:30 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:34:14.395 09:18:30 nvmf_dif -- nvmf/setup.sh@264 -- # local dev 00:34:14.395 09:18:30 nvmf_dif -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:14.395 09:18:30 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:14.395 09:18:30 nvmf_dif 
-- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:34:14.395 09:18:30 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@130 -- # return 0 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:34:16.932 09:18:32 nvmf_dif -- nvmf/setup.sh@284 -- # iptr 00:34:16.932 09:18:32 
nvmf_dif -- nvmf/common.sh@542 -- # iptables-save 00:34:16.932 09:18:32 nvmf_dif -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:16.932 09:18:32 nvmf_dif -- nvmf/common.sh@542 -- # iptables-restore 00:34:16.932 00:34:16.932 real 1m14.592s 00:34:16.932 user 7m10.114s 00:34:16.932 sys 0m19.845s 00:34:16.932 09:18:32 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:16.932 09:18:32 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:16.932 ************************************ 00:34:16.932 END TEST nvmf_dif 00:34:16.932 ************************************ 00:34:16.932 09:18:32 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.932 09:18:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:16.932 09:18:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:16.932 09:18:32 -- common/autotest_common.sh@10 -- # set +x 00:34:16.932 ************************************ 00:34:16.932 START TEST nvmf_abort_qd_sizes 00:34:16.932 ************************************ 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:16.932 * Looking for test storage... 
00:34:16.932 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:16.932 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.933 --rc genhtml_branch_coverage=1 00:34:16.933 --rc genhtml_function_coverage=1 00:34:16.933 --rc genhtml_legend=1 00:34:16.933 --rc geninfo_all_blocks=1 00:34:16.933 --rc geninfo_unexecuted_blocks=1 00:34:16.933 00:34:16.933 ' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.933 --rc genhtml_branch_coverage=1 00:34:16.933 --rc genhtml_function_coverage=1 00:34:16.933 --rc genhtml_legend=1 00:34:16.933 --rc 
geninfo_all_blocks=1 00:34:16.933 --rc geninfo_unexecuted_blocks=1 00:34:16.933 00:34:16.933 ' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.933 --rc genhtml_branch_coverage=1 00:34:16.933 --rc genhtml_function_coverage=1 00:34:16.933 --rc genhtml_legend=1 00:34:16.933 --rc geninfo_all_blocks=1 00:34:16.933 --rc geninfo_unexecuted_blocks=1 00:34:16.933 00:34:16.933 ' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:16.933 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:16.933 --rc genhtml_branch_coverage=1 00:34:16.933 --rc genhtml_function_coverage=1 00:34:16.933 --rc genhtml_legend=1 00:34:16.933 --rc geninfo_all_blocks=1 00:34:16.933 --rc geninfo_unexecuted_blocks=1 00:34:16.933 00:34:16.933 ' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:16.933 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:34:16.933 09:18:32 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:34:23.503 Found 0000:86:00.0 (0x8086 - 0x159b) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:34:23.503 Found 0000:86:00.1 (0x8086 - 0x159b) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:34:23.503 Found net devices under 0000:86:00.0: cvl_0_0 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:34:23.503 Found net devices under 0000:86:00.1: cvl_0_1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # create_target_ns 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- 
# [[ tcp == tcp ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:34:23.503 09:18:38 
nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:34:23.503 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:34:23.503 10.0.0.1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:34:23.504 10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:34:23.504 
09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk 
ping -c 1 10.0.0.1' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:34:23.504 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:34:23.504 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.473 ms 00:34:23.504 00:34:23.504 --- 10.0.0.1 ping statistics --- 00:34:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.504 rtt min/avg/max/mdev = 0.473/0.473/0.473/0.000 ms 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:34:23.504 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:34:23.504 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:34:23.504 00:34:23.504 --- 10.0.0.2 ping statistics --- 00:34:23.504 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:34:23.504 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair++ )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:34:23.504 09:18:38 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:25.408 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.408 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.7 (8086 2021): 
ioatdma -> vfio-pci 00:34:25.666 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:25.666 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:26.605 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:34:26.605 09:18:42 nvmf_abort_qd_sizes 
-- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.605 09:18:42 
nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target1 
00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=2611131 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 2611131 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 2611131 ']' 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:34:26.605 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:34:26.605 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.605 [2024-11-20 09:18:42.593591] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:34:26.606 [2024-11-20 09:18:42.593640] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:34:26.864 [2024-11-20 09:18:42.672472] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:34:26.864 [2024-11-20 09:18:42.716335] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:34:26.864 [2024-11-20 09:18:42.716373] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:34:26.864 [2024-11-20 09:18:42.716381] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:34:26.865 [2024-11-20 09:18:42.716387] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:34:26.865 [2024-11-20 09:18:42.716393] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:34:26.865 [2024-11-20 09:18:42.717996] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:34:26.865 [2024-11-20 09:18:42.718104] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:34:26.865 [2024-11-20 09:18:42.718215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:34:26.865 [2024-11-20 09:18:42.718214] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:26.865 09:18:42 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:26.865 ************************************ 00:34:26.865 START TEST spdk_target_abort 00:34:26.865 ************************************ 00:34:26.865 09:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:34:26.865 09:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:34:26.865 09:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:34:26.865 09:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:26.865 09:18:42 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.142 spdk_targetn1 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.143 [2024-11-20 09:18:45.731985] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:30.143 [2024-11-20 09:18:45.779230] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:30.143 09:18:45 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:33.525 Initializing NVMe Controllers 00:34:33.525 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:33.525 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:33.525 Initialization complete. Launching workers. 
00:34:33.525 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 17302, failed: 0 00:34:33.525 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1331, failed to submit 15971 00:34:33.525 success 792, unsuccessful 539, failed 0 00:34:33.525 09:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:33.525 09:18:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:36.801 Initializing NVMe Controllers 00:34:36.802 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:36.802 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:36.802 Initialization complete. Launching workers. 00:34:36.802 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8635, failed: 0 00:34:36.802 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1263, failed to submit 7372 00:34:36.802 success 317, unsuccessful 946, failed 0 00:34:36.802 09:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:36.802 09:18:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:40.077 Initializing NVMe Controllers 00:34:40.077 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:34:40.077 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:40.077 Initialization complete. Launching workers. 
00:34:40.077 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 37996, failed: 0 00:34:40.077 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2803, failed to submit 35193 00:34:40.077 success 575, unsuccessful 2228, failed 0 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:40.077 09:18:55 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2611131 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 2611131 ']' 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 2611131 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2611131 00:34:41.011 09:18:56 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2611131' 00:34:41.011 killing process with pid 2611131 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 2611131 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 2611131 00:34:41.011 00:34:41.011 real 0m14.008s 00:34:41.011 user 0m53.299s 00:34:41.011 sys 0m2.626s 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:41.011 ************************************ 00:34:41.011 END TEST spdk_target_abort 00:34:41.011 ************************************ 00:34:41.011 09:18:56 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:34:41.011 09:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:41.011 09:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:41.011 09:18:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:34:41.011 ************************************ 00:34:41.011 START TEST kernel_target_abort 00:34:41.011 ************************************ 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local 
kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:34:41.011 09:18:56 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:34:41.011 09:18:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:34:41.011 09:18:57 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:44.297 Waiting for block devices as requested 00:34:44.297 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:44.297 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:44.297 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:44.555 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:44.555 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:44.555 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:44.814 
0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:44.814 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:44.814 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:45.072 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:45.072 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:45.072 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:34:45.072 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:34:45.331 No valid GPT data, bailing 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:34:45.331 09:19:01 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:34:45.331 00:34:45.331 Discovery Log Number of Records 2, Generation counter 2 00:34:45.331 =====Discovery Log Entry 0====== 00:34:45.331 trtype: tcp 00:34:45.331 
adrfam: ipv4 00:34:45.331 subtype: current discovery subsystem 00:34:45.331 treq: not specified, sq flow control disable supported 00:34:45.331 portid: 1 00:34:45.331 trsvcid: 4420 00:34:45.331 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:34:45.331 traddr: 10.0.0.1 00:34:45.331 eflags: none 00:34:45.331 sectype: none 00:34:45.331 =====Discovery Log Entry 1====== 00:34:45.331 trtype: tcp 00:34:45.331 adrfam: ipv4 00:34:45.331 subtype: nvme subsystem 00:34:45.331 treq: not specified, sq flow control disable supported 00:34:45.331 portid: 1 00:34:45.331 trsvcid: 4420 00:34:45.331 subnqn: nqn.2016-06.io.spdk:testnqn 00:34:45.331 traddr: 10.0.0.1 00:34:45.331 eflags: none 00:34:45.331 sectype: none 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:34:45.331 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.332 09:19:01 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:45.332 09:19:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:48.611 Initializing NVMe Controllers 00:34:48.611 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:48.611 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:48.611 Initialization complete. 
Launching workers. 00:34:48.611 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 92931, failed: 0 00:34:48.611 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 92931, failed to submit 0 00:34:48.611 success 0, unsuccessful 92931, failed 0 00:34:48.611 09:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:48.611 09:19:04 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:51.890 Initializing NVMe Controllers 00:34:51.890 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:51.890 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:51.890 Initialization complete. Launching workers. 00:34:51.890 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 146538, failed: 0 00:34:51.890 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 36670, failed to submit 109868 00:34:51.890 success 0, unsuccessful 36670, failed 0 00:34:51.890 09:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:34:51.890 09:19:07 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:34:55.169 Initializing NVMe Controllers 00:34:55.169 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:34:55.169 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:34:55.169 Initialization complete. 
Launching workers. 00:34:55.169 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 138652, failed: 0 00:34:55.169 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 34734, failed to submit 103918 00:34:55.169 success 0, unsuccessful 34734, failed 0 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:34:55.169 09:19:10 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:34:57.701 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.3 (8086 2021): ioatdma -> 
vfio-pci 00:34:57.701 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:34:57.701 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:34:58.637 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:34:58.637 00:34:58.637 real 0m17.516s 00:34:58.637 user 0m9.204s 00:34:58.637 sys 0m5.011s 00:34:58.637 09:19:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:58.637 09:19:14 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:34:58.637 ************************************ 00:34:58.637 END TEST kernel_target_abort 00:34:58.637 ************************************ 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:58.637 rmmod nvme_tcp 00:34:58.637 rmmod nvme_fabrics 00:34:58.637 rmmod nvme_keyring 00:34:58.637 09:19:14 nvmf_abort_qd_sizes 
-- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 2611131 ']' 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 2611131 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 2611131 ']' 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 2611131 00:34:58.637 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (2611131) - No such process 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 2611131 is not found' 00:34:58.637 Process with pid 2611131 is not found 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:34:58.637 09:19:14 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:01.924 Waiting for block devices as requested 00:35:01.924 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:01.924 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:01.924 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:02.207 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:02.465 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:02.465 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:35:02.465 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:02.724 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:02.724 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- nvmf/setup.sh@264 -- # local dev 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:35:02.724 09:19:18 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # return 0 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:05.257 09:19:20 
nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/setup.sh@284 -- # iptr 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-save 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-restore 00:35:05.257 00:35:05.257 real 0m48.269s 00:35:05.257 user 1m6.883s 00:35:05.257 sys 0m16.450s 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:05.257 09:19:20 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.257 ************************************ 00:35:05.257 END TEST nvmf_abort_qd_sizes 00:35:05.257 ************************************ 00:35:05.257 09:19:20 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:05.257 09:19:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:05.257 09:19:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:05.257 09:19:20 -- common/autotest_common.sh@10 -- # set +x 00:35:05.257 ************************************ 00:35:05.257 START TEST keyring_file 00:35:05.257 
************************************ 00:35:05.257 09:19:20 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:05.257 * Looking for test storage... 00:35:05.257 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:05.257 09:19:20 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:05.257 09:19:20 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:05.257 09:19:20 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:05.257 09:19:21 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:05.257 09:19:21 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:05.257 09:19:21 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:05.257 09:19:21 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:05.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.257 --rc genhtml_branch_coverage=1 00:35:05.257 --rc genhtml_function_coverage=1 00:35:05.257 --rc genhtml_legend=1 00:35:05.257 --rc geninfo_all_blocks=1 00:35:05.257 --rc geninfo_unexecuted_blocks=1 00:35:05.257 00:35:05.257 ' 00:35:05.257 09:19:21 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:05.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.257 --rc genhtml_branch_coverage=1 00:35:05.257 --rc genhtml_function_coverage=1 00:35:05.257 --rc genhtml_legend=1 00:35:05.257 --rc geninfo_all_blocks=1 00:35:05.257 --rc geninfo_unexecuted_blocks=1 00:35:05.257 00:35:05.257 ' 00:35:05.257 
09:19:21 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:05.257 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.257 --rc genhtml_branch_coverage=1 00:35:05.257 --rc genhtml_function_coverage=1 00:35:05.257 --rc genhtml_legend=1 00:35:05.257 --rc geninfo_all_blocks=1 00:35:05.257 --rc geninfo_unexecuted_blocks=1 00:35:05.257 00:35:05.257 ' 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:05.258 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:05.258 --rc genhtml_branch_coverage=1 00:35:05.258 --rc genhtml_function_coverage=1 00:35:05.258 --rc genhtml_legend=1 00:35:05.258 --rc geninfo_all_blocks=1 00:35:05.258 --rc geninfo_unexecuted_blocks=1 00:35:05.258 00:35:05.258 ' 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:05.258 09:19:21 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:05.258 09:19:21 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:05.258 09:19:21 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:05.258 09:19:21 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:05.258 09:19:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.258 09:19:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.258 09:19:21 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.258 09:19:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:05.258 09:19:21 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:05.258 09:19:21 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:05.258 09:19:21 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:05.258 09:19:21 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@50 -- # : 0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:05.258 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.gLgKGxC8am 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.gLgKGxC8am 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.gLgKGxC8am 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.gLgKGxC8am 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.ppYaOsuDUO 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:05.258 09:19:21 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.ppYaOsuDUO 00:35:05.258 09:19:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.ppYaOsuDUO 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.ppYaOsuDUO 
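The `format_interchange_psk`/`format_key` calls above (which shell out to `python -`) turn the raw test key into an NVMe TLS PSK interchange string before it is written to the temp file. A minimal sketch of that formatting, assuming the TP 8006 interchange layout `NVMeTLSkey-1:<hh>:<base64(key bytes + little-endian CRC32)>:` with digest `00` (no hash) and assuming the 32-character hex string is used as raw ASCII key material, as the test does; this is an illustration, not SPDK's actual `nvmf/common.sh` implementation:

```python
import base64
import zlib

def format_interchange_psk(key: str, digest: int = 0) -> str:
    """Sketch of the PSK interchange encoding (assumed TP 8006 layout)."""
    data = key.encode("ascii")                      # key string used as raw bytes
    crc = zlib.crc32(data).to_bytes(4, "little")    # integrity tag appended to key
    b64 = base64.b64encode(data + crc).decode("ascii")
    return "NVMeTLSkey-1:{:02x}:{}:".format(digest, b64)

psk = format_interchange_psk("00112233445566778899aabbccddeeff")
assert psk.startswith("NVMeTLSkey-1:00:")
print(psk)
```

The encoded string is what the test writes to `/tmp/tmp.*` and registers via `keyring_file_add_key`; the trailing CRC32 lets a consumer detect a corrupted key file before use.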
00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=2619870 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:05.258 09:19:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 2619870 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2619870 ']' 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.258 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.258 09:19:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.258 [2024-11-20 09:19:21.212389] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:35:05.258 [2024-11-20 09:19:21.212439] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619870 ] 00:35:05.258 [2024-11-20 09:19:21.288916] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.517 [2024-11-20 09:19:21.332353] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.517 09:19:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:05.517 09:19:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:05.517 09:19:21 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:05.517 09:19:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.517 09:19:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.517 [2024-11-20 09:19:21.550501] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:05.776 null0 00:35:05.776 [2024-11-20 09:19:21.582559] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:05.776 [2024-11-20 09:19:21.582862] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:05.776 09:19:21 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:05.776 09:19:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.776 [2024-11-20 09:19:21.610620] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:05.776 request: 00:35:05.776 { 00:35:05.776 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:05.776 "secure_channel": false, 00:35:05.776 "listen_address": { 00:35:05.776 "trtype": "tcp", 00:35:05.776 "traddr": "127.0.0.1", 00:35:05.776 "trsvcid": "4420" 00:35:05.776 }, 00:35:05.776 "method": "nvmf_subsystem_add_listener", 00:35:05.776 "req_id": 1 00:35:05.776 } 00:35:05.776 Got JSON-RPC error response 00:35:05.777 response: 00:35:05.777 { 00:35:05.777 "code": -32602, 00:35:05.777 "message": "Invalid parameters" 00:35:05.777 } 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:05.777 09:19:21 keyring_file -- keyring/file.sh@47 -- # bperfpid=2619887 00:35:05.777 09:19:21 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:05.777 09:19:21 keyring_file -- keyring/file.sh@49 -- # waitforlisten 2619887 /var/tmp/bperf.sock 00:35:05.777 09:19:21 
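The `NOT rpc_cmd ...` sequence above is a negative test: adding the same listener twice must fail, and the harness checks that the JSON-RPC reply carries an error (`es=1`). A minimal sketch of that error check over the exact response body logged above; the `rpc_failed` helper is hypothetical, not part of SPDK's `rpc.py`:

```python
import json

# Error response captured in the log for the duplicate nvmf_subsystem_add_listener.
response = json.loads("""
{
  "code": -32602,
  "message": "Invalid parameters"
}
""")

def rpc_failed(resp: dict) -> bool:
    """Hypothetical helper: treat any non-zero JSON-RPC error code as failure."""
    return resp.get("code", 0) != 0

assert rpc_failed(response)        # the duplicate listener add is rejected
print(response["message"])
```

`-32602` is the standard JSON-RPC "invalid params" code, which is why the harness's `[[ 1 == 0 ]]` guard trips and execution continues down the expected-failure branch.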
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2619887 ']' 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:05.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.777 09:19:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:05.777 [2024-11-20 09:19:21.663240] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 00:35:05.777 [2024-11-20 09:19:21.663283] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2619887 ] 00:35:05.777 [2024-11-20 09:19:21.735319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:05.777 [2024-11-20 09:19:21.778421] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:06.036 09:19:21 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.036 09:19:21 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:06.036 09:19:21 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:06.036 09:19:21 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:06.036 09:19:22 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ppYaOsuDUO 00:35:06.036 09:19:22 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ppYaOsuDUO 00:35:06.294 09:19:22 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:06.294 09:19:22 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:06.294 09:19:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.294 09:19:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.294 09:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.553 09:19:22 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.gLgKGxC8am == \/\t\m\p\/\t\m\p\.\g\L\g\K\G\x\C\8\a\m ]] 00:35:06.553 09:19:22 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:06.553 09:19:22 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:06.553 09:19:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:06.553 09:19:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.553 09:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:06.811 09:19:22 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.ppYaOsuDUO == \/\t\m\p\/\t\m\p\.\p\p\Y\a\O\s\u\D\U\O ]] 00:35:06.811 09:19:22 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:06.811 09:19:22 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:06.811 09:19:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:06.811 09:19:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:06.811 09:19:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:06.811 09:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
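The `get_key`/`get_refcnt` helpers exercised above pipe `keyring_get_keys` through `jq '.[] | select(.name == "keyN")'` and `jq -r .refcnt`. A Python sketch of the same filtering over a sample reply, with the list shape (name/path/refcnt fields and the temp-file paths) inferred from this log rather than taken from SPDK's RPC schema:

```python
import json

# Sample keyring_get_keys output, shape inferred from the log above.
keys = json.loads("""
[
  {"name": "key0", "path": "/tmp/tmp.gLgKGxC8am", "refcnt": 1},
  {"name": "key1", "path": "/tmp/tmp.ppYaOsuDUO", "refcnt": 1}
]
""")

def get_key(name: str) -> dict:
    """Equivalent of jq '.[] | select(.name == "<name>")'."""
    return next(k for k in keys if k["name"] == name)

def get_refcnt(name: str) -> int:
    """Equivalent of get_key | jq -r .refcnt."""
    return get_key(name)["refcnt"]

assert get_key("key0")["path"] == "/tmp/tmp.gLgKGxC8am"
print(get_refcnt("key0"))
```

This mirrors the `(( 1 == 1 ))` refcount checks in the transcript: before a controller attaches with `--psk key0`, each registered key is held only by the keyring itself.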
00:35:07.068 09:19:22 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:07.068 09:19:22 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:07.068 09:19:22 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.068 09:19:22 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.068 09:19:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.068 09:19:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.068 09:19:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.068 09:19:23 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:07.068 09:19:23 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.068 09:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:07.328 [2024-11-20 09:19:23.228846] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:07.328 nvme0n1 00:35:07.328 09:19:23 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:07.328 09:19:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:07.328 09:19:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.328 09:19:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.328 09:19:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:07.328 09:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:07.587 09:19:23 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:07.587 09:19:23 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:07.587 09:19:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:07.587 09:19:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:07.587 09:19:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:07.587 09:19:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:07.587 09:19:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:07.845 09:19:23 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:07.845 09:19:23 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:07.845 Running I/O for 1 seconds... 00:35:09.039 18827.00 IOPS, 73.54 MiB/s 00:35:09.039 Latency(us) 00:35:09.039 [2024-11-20T08:19:25.080Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:09.039 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:09.039 nvme0n1 : 1.00 18867.98 73.70 0.00 0.00 6771.20 3077.34 14588.88 00:35:09.039 [2024-11-20T08:19:25.080Z] =================================================================================================================== 00:35:09.039 [2024-11-20T08:19:25.080Z] Total : 18867.98 73.70 0.00 0.00 6771.20 3077.34 14588.88 00:35:09.039 { 00:35:09.039 "results": [ 00:35:09.039 { 00:35:09.039 "job": "nvme0n1", 00:35:09.039 "core_mask": "0x2", 00:35:09.039 "workload": "randrw", 00:35:09.039 "percentage": 50, 00:35:09.039 "status": "finished", 00:35:09.039 "queue_depth": 128, 00:35:09.039 "io_size": 4096, 00:35:09.039 "runtime": 1.004665, 00:35:09.039 "iops": 18867.980869244973, 00:35:09.039 "mibps": 73.70305027048818, 
00:35:09.039 "io_failed": 0, 00:35:09.039 "io_timeout": 0, 00:35:09.039 "avg_latency_us": 6771.200915988514, 00:35:09.039 "min_latency_us": 3077.342608695652, 00:35:09.039 "max_latency_us": 14588.88347826087 00:35:09.039 } 00:35:09.039 ], 00:35:09.039 "core_count": 1 00:35:09.039 } 00:35:09.039 09:19:24 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:09.039 09:19:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:09.039 09:19:25 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:09.039 09:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.039 09:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.039 09:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:09.039 09:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.039 09:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.297 09:19:25 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:09.297 09:19:25 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:09.297 09:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:09.297 09:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.297 09:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.297 09:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:09.297 09:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.555 09:19:25 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:09.555 09:19:25 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:09.555 09:19:25 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.555 09:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:09.814 [2024-11-20 09:19:25.614809] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:09.814 [2024-11-20 09:19:25.615105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19abd00 (107): Transport endpoint is not connected 00:35:09.814 [2024-11-20 09:19:25.616099] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19abd00 (9): Bad file descriptor 00:35:09.814 [2024-11-20 09:19:25.617101] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:09.814 [2024-11-20 09:19:25.617112] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:09.814 [2024-11-20 09:19:25.617119] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:09.814 [2024-11-20 09:19:25.617128] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:09.814 request: 00:35:09.814 { 00:35:09.814 "name": "nvme0", 00:35:09.814 "trtype": "tcp", 00:35:09.814 "traddr": "127.0.0.1", 00:35:09.814 "adrfam": "ipv4", 00:35:09.814 "trsvcid": "4420", 00:35:09.814 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:09.814 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:09.814 "prchk_reftag": false, 00:35:09.814 "prchk_guard": false, 00:35:09.814 "hdgst": false, 00:35:09.814 "ddgst": false, 00:35:09.814 "psk": "key1", 00:35:09.814 "allow_unrecognized_csi": false, 00:35:09.814 "method": "bdev_nvme_attach_controller", 00:35:09.814 "req_id": 1 00:35:09.814 } 00:35:09.814 Got JSON-RPC error response 00:35:09.814 response: 00:35:09.814 { 00:35:09.814 "code": -5, 00:35:09.814 "message": "Input/output error" 00:35:09.814 } 00:35:09.814 09:19:25 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:09.814 09:19:25 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:09.814 09:19:25 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:09.814 09:19:25 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:09.814 09:19:25 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:09.814 09:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:09.814 09:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:09.814 09:19:25 keyring_file -- keyring/common.sh@10 -- # 
jq '.[] | select(.name == "key0")' 00:35:09.814 09:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:09.815 09:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:09.815 09:19:25 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:09.815 09:19:25 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:10.073 09:19:25 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:10.073 09:19:25 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:10.073 09:19:25 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:10.073 09:19:25 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:10.073 09:19:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.073 09:19:26 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:10.073 09:19:26 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:10.073 09:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:10.331 09:19:26 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:10.331 09:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:10.590 09:19:26 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:10.590 09:19:26 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:10.590 09:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:10.590 09:19:26 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:10.590 09:19:26 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.gLgKGxC8am 00:35:10.590 09:19:26 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:10.590 09:19:26 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:10.590 09:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:10.848 [2024-11-20 09:19:26.803267] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.gLgKGxC8am': 0100660 00:35:10.848 [2024-11-20 09:19:26.803294] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:10.848 request: 00:35:10.848 { 00:35:10.848 "name": "key0", 00:35:10.848 "path": "/tmp/tmp.gLgKGxC8am", 00:35:10.848 "method": "keyring_file_add_key", 00:35:10.848 "req_id": 1 00:35:10.848 } 00:35:10.848 Got JSON-RPC error response 00:35:10.848 response: 00:35:10.848 { 00:35:10.848 "code": -1, 00:35:10.848 "message": "Operation not permitted" 00:35:10.848 } 00:35:10.848 09:19:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:10.848 09:19:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:10.848 09:19:26 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:10.848 09:19:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:10.848 09:19:26 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.gLgKGxC8am 00:35:10.848 09:19:26 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:10.848 09:19:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.gLgKGxC8am 00:35:11.107 09:19:27 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.gLgKGxC8am 00:35:11.107 09:19:27 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:11.107 09:19:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:11.107 09:19:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:11.107 09:19:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:11.107 09:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:11.107 09:19:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:11.366 09:19:27 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:11.366 09:19:27 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:11.366 09:19:27 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:11.366 09:19:27 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.366 09:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.366 [2024-11-20 09:19:27.392836] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.gLgKGxC8am': No such file or directory 00:35:11.366 [2024-11-20 09:19:27.392858] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:11.366 [2024-11-20 09:19:27.392874] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:11.366 [2024-11-20 09:19:27.392881] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:11.366 [2024-11-20 09:19:27.392888] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:11.366 [2024-11-20 09:19:27.392893] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:11.366 request: 00:35:11.366 { 00:35:11.366 "name": "nvme0", 00:35:11.366 "trtype": "tcp", 00:35:11.366 "traddr": "127.0.0.1", 00:35:11.366 "adrfam": "ipv4", 00:35:11.366 "trsvcid": "4420", 00:35:11.366 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:11.366 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:11.366 "prchk_reftag": false, 00:35:11.366 "prchk_guard": false, 00:35:11.366 "hdgst": false, 00:35:11.366 "ddgst": false, 00:35:11.366 "psk": "key0", 00:35:11.366 "allow_unrecognized_csi": false, 00:35:11.366 "method": "bdev_nvme_attach_controller", 00:35:11.366 "req_id": 1 00:35:11.366 } 00:35:11.366 Got JSON-RPC error response 00:35:11.366 response: 00:35:11.366 { 00:35:11.366 "code": -19, 00:35:11.366 "message": "No such device" 00:35:11.366 } 00:35:11.628 09:19:27 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:11.628 09:19:27 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:11.628 09:19:27 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:11.628 09:19:27 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:11.628 09:19:27 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:11.628 09:19:27 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.CMMrnfVcoS 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:11.628 09:19:27 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:11.628 09:19:27 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:35:11.628 09:19:27 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:11.628 09:19:27 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:35:11.628 09:19:27 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:11.628 09:19:27 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.CMMrnfVcoS 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.CMMrnfVcoS 00:35:11.628 09:19:27 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.CMMrnfVcoS 00:35:11.628 09:19:27 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CMMrnfVcoS 00:35:11.628 09:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CMMrnfVcoS 00:35:11.884 09:19:27 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:11.885 09:19:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:12.142 nvme0n1 00:35:12.142 09:19:28 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:12.142 09:19:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.142 09:19:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.142 09:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.142 09:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.142 
09:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.401 09:19:28 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:12.401 09:19:28 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:12.401 09:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:12.660 09:19:28 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:12.660 09:19:28 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:12.660 09:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.660 09:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.660 09:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.918 09:19:28 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:12.918 09:19:28 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:12.918 09:19:28 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:12.918 09:19:28 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:12.918 09:19:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller 
nvme0 00:35:13.176 09:19:29 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:13.176 09:19:29 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:13.176 09:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:13.435 09:19:29 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:13.435 09:19:29 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.CMMrnfVcoS 00:35:13.435 09:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.CMMrnfVcoS 00:35:13.693 09:19:29 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.ppYaOsuDUO 00:35:13.693 09:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.ppYaOsuDUO 00:35:13.693 09:19:29 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.693 09:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:13.951 nvme0n1 00:35:13.951 09:19:29 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:13.951 09:19:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:14.518 09:19:30 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:14.518 "subsystems": [ 00:35:14.518 { 00:35:14.518 "subsystem": "keyring", 00:35:14.518 
"config": [ 00:35:14.518 { 00:35:14.518 "method": "keyring_file_add_key", 00:35:14.518 "params": { 00:35:14.518 "name": "key0", 00:35:14.518 "path": "/tmp/tmp.CMMrnfVcoS" 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "keyring_file_add_key", 00:35:14.518 "params": { 00:35:14.518 "name": "key1", 00:35:14.518 "path": "/tmp/tmp.ppYaOsuDUO" 00:35:14.518 } 00:35:14.518 } 00:35:14.518 ] 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "subsystem": "iobuf", 00:35:14.518 "config": [ 00:35:14.518 { 00:35:14.518 "method": "iobuf_set_options", 00:35:14.518 "params": { 00:35:14.518 "small_pool_count": 8192, 00:35:14.518 "large_pool_count": 1024, 00:35:14.518 "small_bufsize": 8192, 00:35:14.518 "large_bufsize": 135168, 00:35:14.518 "enable_numa": false 00:35:14.518 } 00:35:14.518 } 00:35:14.518 ] 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "subsystem": "sock", 00:35:14.518 "config": [ 00:35:14.518 { 00:35:14.518 "method": "sock_set_default_impl", 00:35:14.518 "params": { 00:35:14.518 "impl_name": "posix" 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "sock_impl_set_options", 00:35:14.518 "params": { 00:35:14.518 "impl_name": "ssl", 00:35:14.518 "recv_buf_size": 4096, 00:35:14.518 "send_buf_size": 4096, 00:35:14.518 "enable_recv_pipe": true, 00:35:14.518 "enable_quickack": false, 00:35:14.518 "enable_placement_id": 0, 00:35:14.518 "enable_zerocopy_send_server": true, 00:35:14.518 "enable_zerocopy_send_client": false, 00:35:14.518 "zerocopy_threshold": 0, 00:35:14.518 "tls_version": 0, 00:35:14.518 "enable_ktls": false 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "sock_impl_set_options", 00:35:14.518 "params": { 00:35:14.518 "impl_name": "posix", 00:35:14.518 "recv_buf_size": 2097152, 00:35:14.518 "send_buf_size": 2097152, 00:35:14.518 "enable_recv_pipe": true, 00:35:14.518 "enable_quickack": false, 00:35:14.518 "enable_placement_id": 0, 00:35:14.518 "enable_zerocopy_send_server": true, 00:35:14.518 
"enable_zerocopy_send_client": false, 00:35:14.518 "zerocopy_threshold": 0, 00:35:14.518 "tls_version": 0, 00:35:14.518 "enable_ktls": false 00:35:14.518 } 00:35:14.518 } 00:35:14.518 ] 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "subsystem": "vmd", 00:35:14.518 "config": [] 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "subsystem": "accel", 00:35:14.518 "config": [ 00:35:14.518 { 00:35:14.518 "method": "accel_set_options", 00:35:14.518 "params": { 00:35:14.518 "small_cache_size": 128, 00:35:14.518 "large_cache_size": 16, 00:35:14.518 "task_count": 2048, 00:35:14.518 "sequence_count": 2048, 00:35:14.518 "buf_count": 2048 00:35:14.518 } 00:35:14.518 } 00:35:14.518 ] 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "subsystem": "bdev", 00:35:14.518 "config": [ 00:35:14.518 { 00:35:14.518 "method": "bdev_set_options", 00:35:14.518 "params": { 00:35:14.518 "bdev_io_pool_size": 65535, 00:35:14.518 "bdev_io_cache_size": 256, 00:35:14.518 "bdev_auto_examine": true, 00:35:14.518 "iobuf_small_cache_size": 128, 00:35:14.518 "iobuf_large_cache_size": 16 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "bdev_raid_set_options", 00:35:14.518 "params": { 00:35:14.518 "process_window_size_kb": 1024, 00:35:14.518 "process_max_bandwidth_mb_sec": 0 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "bdev_iscsi_set_options", 00:35:14.518 "params": { 00:35:14.518 "timeout_sec": 30 00:35:14.518 } 00:35:14.518 }, 00:35:14.518 { 00:35:14.518 "method": "bdev_nvme_set_options", 00:35:14.518 "params": { 00:35:14.518 "action_on_timeout": "none", 00:35:14.518 "timeout_us": 0, 00:35:14.518 "timeout_admin_us": 0, 00:35:14.518 "keep_alive_timeout_ms": 10000, 00:35:14.518 "arbitration_burst": 0, 00:35:14.518 "low_priority_weight": 0, 00:35:14.518 "medium_priority_weight": 0, 00:35:14.518 "high_priority_weight": 0, 00:35:14.518 "nvme_adminq_poll_period_us": 10000, 00:35:14.518 "nvme_ioq_poll_period_us": 0, 00:35:14.518 "io_queue_requests": 512, 00:35:14.518 
"delay_cmd_submit": true, 00:35:14.518 "transport_retry_count": 4, 00:35:14.518 "bdev_retry_count": 3, 00:35:14.518 "transport_ack_timeout": 0, 00:35:14.519 "ctrlr_loss_timeout_sec": 0, 00:35:14.519 "reconnect_delay_sec": 0, 00:35:14.519 "fast_io_fail_timeout_sec": 0, 00:35:14.519 "disable_auto_failback": false, 00:35:14.519 "generate_uuids": false, 00:35:14.519 "transport_tos": 0, 00:35:14.519 "nvme_error_stat": false, 00:35:14.519 "rdma_srq_size": 0, 00:35:14.519 "io_path_stat": false, 00:35:14.519 "allow_accel_sequence": false, 00:35:14.519 "rdma_max_cq_size": 0, 00:35:14.519 "rdma_cm_event_timeout_ms": 0, 00:35:14.519 "dhchap_digests": [ 00:35:14.519 "sha256", 00:35:14.519 "sha384", 00:35:14.519 "sha512" 00:35:14.519 ], 00:35:14.519 "dhchap_dhgroups": [ 00:35:14.519 "null", 00:35:14.519 "ffdhe2048", 00:35:14.519 "ffdhe3072", 00:35:14.519 "ffdhe4096", 00:35:14.519 "ffdhe6144", 00:35:14.519 "ffdhe8192" 00:35:14.519 ] 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_nvme_attach_controller", 00:35:14.519 "params": { 00:35:14.519 "name": "nvme0", 00:35:14.519 "trtype": "TCP", 00:35:14.519 "adrfam": "IPv4", 00:35:14.519 "traddr": "127.0.0.1", 00:35:14.519 "trsvcid": "4420", 00:35:14.519 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:14.519 "prchk_reftag": false, 00:35:14.519 "prchk_guard": false, 00:35:14.519 "ctrlr_loss_timeout_sec": 0, 00:35:14.519 "reconnect_delay_sec": 0, 00:35:14.519 "fast_io_fail_timeout_sec": 0, 00:35:14.519 "psk": "key0", 00:35:14.519 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.519 "hdgst": false, 00:35:14.519 "ddgst": false, 00:35:14.519 "multipath": "multipath" 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_nvme_set_hotplug", 00:35:14.519 "params": { 00:35:14.519 "period_us": 100000, 00:35:14.519 "enable": false 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_wait_for_examine" 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 
"subsystem": "nbd", 00:35:14.519 "config": [] 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }' 00:35:14.519 09:19:30 keyring_file -- keyring/file.sh@115 -- # killprocess 2619887 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2619887 ']' 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2619887 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619887 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619887' 00:35:14.519 killing process with pid 2619887 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@973 -- # kill 2619887 00:35:14.519 Received shutdown signal, test time was about 1.000000 seconds 00:35:14.519 00:35:14.519 Latency(us) 00:35:14.519 [2024-11-20T08:19:30.560Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:14.519 [2024-11-20T08:19:30.560Z] =================================================================================================================== 00:35:14.519 [2024-11-20T08:19:30.560Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@978 -- # wait 2619887 00:35:14.519 09:19:30 keyring_file -- keyring/file.sh@118 -- # bperfpid=2621396 00:35:14.519 09:19:30 keyring_file -- keyring/file.sh@120 -- # waitforlisten 2621396 /var/tmp/bperf.sock 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 2621396 ']' 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:14.519 09:19:30 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:14.519 09:19:30 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:14.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:14.519 09:19:30 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:14.519 "subsystems": [ 00:35:14.519 { 00:35:14.519 "subsystem": "keyring", 00:35:14.519 "config": [ 00:35:14.519 { 00:35:14.519 "method": "keyring_file_add_key", 00:35:14.519 "params": { 00:35:14.519 "name": "key0", 00:35:14.519 "path": "/tmp/tmp.CMMrnfVcoS" 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "keyring_file_add_key", 00:35:14.519 "params": { 00:35:14.519 "name": "key1", 00:35:14.519 "path": "/tmp/tmp.ppYaOsuDUO" 00:35:14.519 } 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "subsystem": "iobuf", 00:35:14.519 "config": [ 00:35:14.519 { 00:35:14.519 "method": "iobuf_set_options", 00:35:14.519 "params": { 00:35:14.519 "small_pool_count": 8192, 00:35:14.519 "large_pool_count": 1024, 00:35:14.519 "small_bufsize": 8192, 00:35:14.519 "large_bufsize": 135168, 00:35:14.519 "enable_numa": false 00:35:14.519 } 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "subsystem": "sock", 00:35:14.519 "config": [ 00:35:14.519 { 00:35:14.519 "method": "sock_set_default_impl", 00:35:14.519 "params": { 00:35:14.519 "impl_name": "posix" 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "sock_impl_set_options", 00:35:14.519 "params": { 00:35:14.519 "impl_name": "ssl", 00:35:14.519 "recv_buf_size": 4096, 00:35:14.519 
"send_buf_size": 4096, 00:35:14.519 "enable_recv_pipe": true, 00:35:14.519 "enable_quickack": false, 00:35:14.519 "enable_placement_id": 0, 00:35:14.519 "enable_zerocopy_send_server": true, 00:35:14.519 "enable_zerocopy_send_client": false, 00:35:14.519 "zerocopy_threshold": 0, 00:35:14.519 "tls_version": 0, 00:35:14.519 "enable_ktls": false 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "sock_impl_set_options", 00:35:14.519 "params": { 00:35:14.519 "impl_name": "posix", 00:35:14.519 "recv_buf_size": 2097152, 00:35:14.519 "send_buf_size": 2097152, 00:35:14.519 "enable_recv_pipe": true, 00:35:14.519 "enable_quickack": false, 00:35:14.519 "enable_placement_id": 0, 00:35:14.519 "enable_zerocopy_send_server": true, 00:35:14.519 "enable_zerocopy_send_client": false, 00:35:14.519 "zerocopy_threshold": 0, 00:35:14.519 "tls_version": 0, 00:35:14.519 "enable_ktls": false 00:35:14.519 } 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "subsystem": "vmd", 00:35:14.519 "config": [] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "subsystem": "accel", 00:35:14.519 "config": [ 00:35:14.519 { 00:35:14.519 "method": "accel_set_options", 00:35:14.519 "params": { 00:35:14.519 "small_cache_size": 128, 00:35:14.519 "large_cache_size": 16, 00:35:14.519 "task_count": 2048, 00:35:14.519 "sequence_count": 2048, 00:35:14.519 "buf_count": 2048 00:35:14.519 } 00:35:14.519 } 00:35:14.519 ] 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "subsystem": "bdev", 00:35:14.519 "config": [ 00:35:14.519 { 00:35:14.519 "method": "bdev_set_options", 00:35:14.519 "params": { 00:35:14.519 "bdev_io_pool_size": 65535, 00:35:14.519 "bdev_io_cache_size": 256, 00:35:14.519 "bdev_auto_examine": true, 00:35:14.519 "iobuf_small_cache_size": 128, 00:35:14.519 "iobuf_large_cache_size": 16 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_raid_set_options", 00:35:14.519 "params": { 00:35:14.519 "process_window_size_kb": 1024, 00:35:14.519 
"process_max_bandwidth_mb_sec": 0 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_iscsi_set_options", 00:35:14.519 "params": { 00:35:14.519 "timeout_sec": 30 00:35:14.519 } 00:35:14.519 }, 00:35:14.519 { 00:35:14.519 "method": "bdev_nvme_set_options", 00:35:14.519 "params": { 00:35:14.519 "action_on_timeout": "none", 00:35:14.519 "timeout_us": 0, 00:35:14.520 "timeout_admin_us": 0, 00:35:14.520 "keep_alive_timeout_ms": 10000, 00:35:14.520 "arbitration_burst": 0, 00:35:14.520 "low_priority_weight": 0, 00:35:14.520 "medium_priority_weight": 0, 00:35:14.520 "high_priority_weight": 0, 00:35:14.520 "nvme_adminq_poll_period_us": 10000, 00:35:14.520 "nvme_ioq_poll_period_us": 0, 00:35:14.520 "io_queue_requests": 512, 00:35:14.520 "delay_cmd_submit": true, 00:35:14.520 "transport_retry_count": 4, 00:35:14.520 "bdev_retry_count": 3, 00:35:14.520 "transport_ack_timeout": 0, 00:35:14.520 "ctrlr_loss_timeout_sec": 0, 00:35:14.520 "reconnect_delay_sec": 0, 00:35:14.520 "fast_io_fail_timeout_sec": 0, 00:35:14.520 "disable_auto_failback": false, 00:35:14.520 "generate_uuids": false, 00:35:14.520 "transport_tos": 0, 00:35:14.520 "nvme_error_stat": false, 00:35:14.520 "rdma_srq_size": 0, 00:35:14.520 "io_path_stat": false, 00:35:14.520 "allow_accel_sequence": false, 00:35:14.520 "rdma_max_cq_size": 0, 00:35:14.520 "rdma_cm_event_timeout_ms": 0, 00:35:14.520 "dhchap_digests": [ 00:35:14.520 "sha256", 00:35:14.520 "sha384", 00:35:14.520 "sha512" 00:35:14.520 ], 00:35:14.520 "dhchap_dhgroups": [ 00:35:14.520 "null", 00:35:14.520 "ffdhe2048", 00:35:14.520 "ffdhe3072", 00:35:14.520 "ffdhe4096", 00:35:14.520 "ffdhe6144", 00:35:14.520 "ffdhe8192" 00:35:14.520 ] 00:35:14.520 } 00:35:14.520 }, 00:35:14.520 { 00:35:14.520 "method": "bdev_nvme_attach_controller", 00:35:14.520 "params": { 00:35:14.520 "name": "nvme0", 00:35:14.520 "trtype": "TCP", 00:35:14.520 "adrfam": "IPv4", 00:35:14.520 "traddr": "127.0.0.1", 00:35:14.520 "trsvcid": "4420", 00:35:14.520 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:35:14.520 "prchk_reftag": false, 00:35:14.520 "prchk_guard": false, 00:35:14.520 "ctrlr_loss_timeout_sec": 0, 00:35:14.520 "reconnect_delay_sec": 0, 00:35:14.520 "fast_io_fail_timeout_sec": 0, 00:35:14.520 "psk": "key0", 00:35:14.520 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:14.520 "hdgst": false, 00:35:14.520 "ddgst": false, 00:35:14.520 "multipath": "multipath" 00:35:14.520 } 00:35:14.520 }, 00:35:14.520 { 00:35:14.520 "method": "bdev_nvme_set_hotplug", 00:35:14.520 "params": { 00:35:14.520 "period_us": 100000, 00:35:14.520 "enable": false 00:35:14.520 } 00:35:14.520 }, 00:35:14.520 { 00:35:14.520 "method": "bdev_wait_for_examine" 00:35:14.520 } 00:35:14.520 ] 00:35:14.520 }, 00:35:14.520 { 00:35:14.520 "subsystem": "nbd", 00:35:14.520 "config": [] 00:35:14.520 } 00:35:14.520 ] 00:35:14.520 }' 00:35:14.520 09:19:30 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:14.520 09:19:30 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:14.520 [2024-11-20 09:19:30.514957] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:35:14.520 [2024-11-20 09:19:30.515009] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621396 ] 00:35:14.779 [2024-11-20 09:19:30.589930] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:14.779 [2024-11-20 09:19:30.630060] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:14.779 [2024-11-20 09:19:30.792439] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:15.345 09:19:31 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:15.345 09:19:31 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:15.345 09:19:31 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:15.345 09:19:31 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:15.345 09:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.607 09:19:31 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:15.607 09:19:31 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:15.607 09:19:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:15.607 09:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.607 09:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.607 09:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:15.607 09:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:15.865 09:19:31 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:15.865 09:19:31 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:15.865 09:19:31 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:15.865 09:19:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:15.865 09:19:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:15.865 09:19:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:15.865 09:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:16.124 09:19:31 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:16.124 09:19:31 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:16.124 09:19:31 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:16.124 09:19:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:16.124 09:19:32 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:16.124 09:19:32 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:16.124 09:19:32 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.CMMrnfVcoS /tmp/tmp.ppYaOsuDUO 00:35:16.124 09:19:32 keyring_file -- keyring/file.sh@20 -- # killprocess 2621396 00:35:16.124 09:19:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2621396 ']' 00:35:16.124 09:19:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2621396 00:35:16.124 09:19:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:16.124 09:19:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621396 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 2621396' 00:35:16.382 killing process with pid 2621396 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@973 -- # kill 2621396 00:35:16.382 Received shutdown signal, test time was about 1.000000 seconds 00:35:16.382 00:35:16.382 Latency(us) 00:35:16.382 [2024-11-20T08:19:32.423Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:16.382 [2024-11-20T08:19:32.423Z] =================================================================================================================== 00:35:16.382 [2024-11-20T08:19:32.423Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@978 -- # wait 2621396 00:35:16.382 09:19:32 keyring_file -- keyring/file.sh@21 -- # killprocess 2619870 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 2619870 ']' 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@958 -- # kill -0 2619870 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2619870 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2619870' 00:35:16.382 killing process with pid 2619870 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@973 -- # kill 2619870 00:35:16.382 09:19:32 keyring_file -- common/autotest_common.sh@978 -- # wait 2619870 00:35:16.949 00:35:16.949 real 0m11.869s 00:35:16.949 user 0m29.517s 00:35:16.949 sys 0m2.684s 00:35:16.949 09:19:32 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:16.949 09:19:32 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:16.949 ************************************ 00:35:16.949 END TEST keyring_file 00:35:16.949 ************************************ 00:35:16.949 09:19:32 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:16.950 09:19:32 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:16.950 09:19:32 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:16.950 09:19:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:16.950 09:19:32 -- common/autotest_common.sh@10 -- # set +x 00:35:16.950 ************************************ 00:35:16.950 START TEST keyring_linux 00:35:16.950 ************************************ 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:16.950 Joined session keyring: 676014253 00:35:16.950 * Looking for test storage... 
00:35:16.950 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:16.950 09:19:32 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.950 --rc genhtml_branch_coverage=1 00:35:16.950 --rc genhtml_function_coverage=1 00:35:16.950 --rc genhtml_legend=1 00:35:16.950 --rc geninfo_all_blocks=1 00:35:16.950 --rc geninfo_unexecuted_blocks=1 00:35:16.950 00:35:16.950 ' 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.950 --rc genhtml_branch_coverage=1 00:35:16.950 --rc genhtml_function_coverage=1 00:35:16.950 --rc genhtml_legend=1 00:35:16.950 --rc geninfo_all_blocks=1 00:35:16.950 --rc geninfo_unexecuted_blocks=1 00:35:16.950 00:35:16.950 ' 
00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.950 --rc genhtml_branch_coverage=1 00:35:16.950 --rc genhtml_function_coverage=1 00:35:16.950 --rc genhtml_legend=1 00:35:16.950 --rc geninfo_all_blocks=1 00:35:16.950 --rc geninfo_unexecuted_blocks=1 00:35:16.950 00:35:16.950 ' 00:35:16.950 09:19:32 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:16.950 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:16.950 --rc genhtml_branch_coverage=1 00:35:16.950 --rc genhtml_function_coverage=1 00:35:16.950 --rc genhtml_legend=1 00:35:16.950 --rc geninfo_all_blocks=1 00:35:16.950 --rc geninfo_unexecuted_blocks=1 00:35:16.950 00:35:16.950 ' 00:35:16.950 09:19:32 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:16.950 09:19:32 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:16.950 09:19:32 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.209 09:19:32 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:17.209 09:19:32 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:17.209 09:19:32 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:17.209 09:19:32 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:17.209 09:19:32 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:17.209 09:19:32 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.209 09:19:32 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.209 09:19:32 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.209 09:19:32 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:17.209 09:19:32 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:17.209 09:19:32 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:17.209 09:19:32 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:17.209 09:19:32 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:17.209 09:19:32 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:17.209 09:19:33 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:17.209 09:19:33 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:17.210 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@507 -- # python - 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:17.210 /tmp/:spdk-test:key0 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:35:17.210 09:19:33 keyring_linux -- nvmf/common.sh@507 -- # python - 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:17.210 09:19:33 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:17.210 /tmp/:spdk-test:key1 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=2621953 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 2621953 00:35:17.210 09:19:33 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2621953 ']' 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:17.210 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.210 09:19:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.210 [2024-11-20 09:19:33.147122] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:35:17.210 [2024-11-20 09:19:33.147174] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621953 ] 00:35:17.210 [2024-11-20 09:19:33.223687] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.469 [2024-11-20 09:19:33.268625] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:17.469 09:19:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.469 09:19:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:17.469 09:19:33 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:17.469 09:19:33 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:17.469 09:19:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.469 [2024-11-20 09:19:33.491137] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:17.755 null0 00:35:17.755 [2024-11-20 09:19:33.523184] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:17.755 [2024-11-20 09:19:33.523548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:17.755 09:19:33 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:17.755 1059576767 00:35:17.755 09:19:33 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:17.755 790367385 00:35:17.755 09:19:33 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=2621989 00:35:17.755 09:19:33 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 2621989 /var/tmp/bperf.sock 00:35:17.755 09:19:33 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 2621989 ']' 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:17.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:17.755 [2024-11-20 09:19:33.595036] Starting SPDK v25.01-pre git sha1 1c7c7c64f / DPDK 24.03.0 initialization... 
00:35:17.755 [2024-11-20 09:19:33.595079] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2621989 ] 00:35:17.755 [2024-11-20 09:19:33.668566] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:17.755 [2024-11-20 09:19:33.711752] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:17.755 09:19:33 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:17.755 09:19:33 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:17.755 09:19:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:18.012 09:19:33 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:18.012 09:19:33 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:18.270 09:19:34 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:18.270 09:19:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:18.528 [2024-11-20 09:19:34.408956] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:18.528 nvme0n1 00:35:18.528 09:19:34 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:18.528 09:19:34 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:18.528 09:19:34 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:18.528 09:19:34 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:18.528 09:19:34 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:18.528 09:19:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:18.786 09:19:34 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:18.786 09:19:34 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:18.786 09:19:34 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:18.786 09:19:34 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:18.786 09:19:34 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:18.786 09:19:34 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:18.786 09:19:34 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@25 -- # sn=1059576767 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@26 -- # [[ 1059576767 == \1\0\5\9\5\7\6\7\6\7 ]] 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1059576767 00:35:19.043 09:19:34 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:19.043 09:19:34 
keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:19.043 Running I/O for 1 seconds... 00:35:19.976 21278.00 IOPS, 83.12 MiB/s 00:35:19.976 Latency(us) 00:35:19.976 [2024-11-20T08:19:36.017Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:19.976 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:35:19.976 nvme0n1 : 1.01 21277.34 83.11 0.00 0.00 5996.18 5157.40 12537.32 00:35:19.976 [2024-11-20T08:19:36.017Z] =================================================================================================================== 00:35:19.976 [2024-11-20T08:19:36.017Z] Total : 21277.34 83.11 0.00 0.00 5996.18 5157.40 12537.32 00:35:19.976 { 00:35:19.976 "results": [ 00:35:19.976 { 00:35:19.976 "job": "nvme0n1", 00:35:19.976 "core_mask": "0x2", 00:35:19.976 "workload": "randread", 00:35:19.976 "status": "finished", 00:35:19.976 "queue_depth": 128, 00:35:19.976 "io_size": 4096, 00:35:19.976 "runtime": 1.006047, 00:35:19.976 "iops": 21277.335949513294, 00:35:19.976 "mibps": 83.1145935527863, 00:35:19.976 "io_failed": 0, 00:35:19.976 "io_timeout": 0, 00:35:19.976 "avg_latency_us": 5996.181669665962, 00:35:19.976 "min_latency_us": 5157.398260869565, 00:35:19.976 "max_latency_us": 12537.321739130435 00:35:19.976 } 00:35:19.976 ], 00:35:19.976 "core_count": 1 00:35:19.976 } 00:35:20.234 09:19:36 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:20.234 09:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:20.234 09:19:36 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:35:20.234 09:19:36 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:35:20.234 09:19:36 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:20.234 
09:19:36 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:20.234 09:19:36 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:20.234 09:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:20.492 09:19:36 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:35:20.492 09:19:36 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:20.492 09:19:36 keyring_linux -- keyring/linux.sh@23 -- # return 00:35:20.492 09:19:36 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:20.492 09:19:36 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:20.492 09:19:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:35:20.750 [2024-11-20 09:19:36.602287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:20.750 [2024-11-20 09:19:36.602516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1276a70 (107): Transport endpoint is not connected 00:35:20.750 [2024-11-20 09:19:36.603511] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1276a70 (9): Bad file descriptor 00:35:20.750 [2024-11-20 09:19:36.604512] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:20.750 [2024-11-20 09:19:36.604524] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:20.750 [2024-11-20 09:19:36.604531] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:20.750 [2024-11-20 09:19:36.604540] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:35:20.750 request: 00:35:20.750 { 00:35:20.750 "name": "nvme0", 00:35:20.750 "trtype": "tcp", 00:35:20.750 "traddr": "127.0.0.1", 00:35:20.750 "adrfam": "ipv4", 00:35:20.750 "trsvcid": "4420", 00:35:20.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:20.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:20.750 "prchk_reftag": false, 00:35:20.750 "prchk_guard": false, 00:35:20.750 "hdgst": false, 00:35:20.750 "ddgst": false, 00:35:20.750 "psk": ":spdk-test:key1", 00:35:20.750 "allow_unrecognized_csi": false, 00:35:20.750 "method": "bdev_nvme_attach_controller", 00:35:20.750 "req_id": 1 00:35:20.750 } 00:35:20.750 Got JSON-RPC error response 00:35:20.750 response: 00:35:20.750 { 00:35:20.750 "code": -5, 00:35:20.750 "message": "Input/output error" 00:35:20.750 } 00:35:20.750 09:19:36 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:35:20.750 09:19:36 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:20.750 09:19:36 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:20.750 09:19:36 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@33 -- # sn=1059576767 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1059576767 00:35:20.750 1 links removed 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 
00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@33 -- # sn=790367385
00:35:20.750 09:19:36 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 790367385
00:35:20.751 1 links removed
00:35:20.751 09:19:36 keyring_linux -- keyring/linux.sh@41 -- # killprocess 2621989
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2621989 ']'
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2621989
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621989
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621989'
00:35:20.751 killing process with pid 2621989
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 2621989
00:35:20.751 Received shutdown signal, test time was about 1.000000 seconds
00:35:20.751
00:35:20.751 Latency(us)
00:35:20.751 [2024-11-20T08:19:36.792Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:35:20.751 [2024-11-20T08:19:36.792Z] ===================================================================================================================
00:35:20.751 [2024-11-20T08:19:36.792Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:35:20.751 09:19:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 2621989
00:35:21.009 09:19:36 keyring_linux -- keyring/linux.sh@42 -- # killprocess 2621953
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 2621953 ']'
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 2621953
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 2621953
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 2621953'
00:35:21.009 killing process with pid 2621953
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@973 -- # kill 2621953
00:35:21.009 09:19:36 keyring_linux -- common/autotest_common.sh@978 -- # wait 2621953
00:35:21.267
00:35:21.267 real 0m4.396s
00:35:21.267 user 0m8.329s
00:35:21.267 sys 0m1.437s
00:35:21.267 09:19:37 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:35:21.267 09:19:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:35:21.267 ************************************
00:35:21.267 END TEST keyring_linux
00:35:21.267 ************************************
00:35:21.267 09:19:37 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:35:21.267 09:19:37 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:35:21.267 09:19:37 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:35:21.267 09:19:37 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:35:21.267 09:19:37 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:35:21.267 09:19:37 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:35:21.267 09:19:37 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:35:21.267 09:19:37 -- common/autotest_common.sh@726 -- # xtrace_disable
00:35:21.267 09:19:37 -- common/autotest_common.sh@10 -- # set +x
00:35:21.267 09:19:37 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:35:21.267 09:19:37 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:35:21.267 09:19:37 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:35:21.267 09:19:37 -- common/autotest_common.sh@10 -- # set +x
00:35:26.724 INFO: APP EXITING
00:35:26.724 INFO: killing all VMs
00:35:26.724 INFO: killing vhost app
00:35:26.724 INFO: EXIT DONE
00:35:29.258 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:35:29.258 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:35:29.258 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:35:29.258 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:35:29.258 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:35:29.259 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:35:32.546 Cleaning
00:35:32.546 Removing: /var/run/dpdk/spdk0/config
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:35:32.546 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:35:32.546 Removing: /var/run/dpdk/spdk0/hugepage_info
00:35:32.546 Removing: /var/run/dpdk/spdk1/config
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:35:32.546 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:35:32.546 Removing: /var/run/dpdk/spdk1/hugepage_info
00:35:32.546 Removing: /var/run/dpdk/spdk2/config
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:35:32.546 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:35:32.546 Removing: /var/run/dpdk/spdk2/hugepage_info
00:35:32.546 Removing: /var/run/dpdk/spdk3/config
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:35:32.546 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:35:32.546 Removing: /var/run/dpdk/spdk3/hugepage_info
00:35:32.546 Removing: /var/run/dpdk/spdk4/config
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:35:32.546 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:35:32.546 Removing: /var/run/dpdk/spdk4/hugepage_info
00:35:32.546 Removing: /dev/shm/bdev_svc_trace.1
00:35:32.546 Removing: /dev/shm/nvmf_trace.0
00:35:32.546 Removing: /dev/shm/spdk_tgt_trace.pid2140823
00:35:32.546 Removing: /var/run/dpdk/spdk0
00:35:32.546 Removing: /var/run/dpdk/spdk1
00:35:32.546 Removing: /var/run/dpdk/spdk2
00:35:32.546 Removing: /var/run/dpdk/spdk3
00:35:32.546 Removing: /var/run/dpdk/spdk4
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2138683
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2139748
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2140823
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2141464
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2142406
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2142429
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2143428
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2143630
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2143884
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2145594
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2147119
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2147595
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2147878
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2148182
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2148472
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2148732
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2148979
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2149263
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2150005
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2153009
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2153268
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2153528
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2153531
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154025
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154036
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154524
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154533
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154791
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2154814
00:35:32.546 Removing: /var/run/dpdk/spdk_pid2155055
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2155284
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2155837
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2156009
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2156348
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2160126
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2164416
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2174691
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2175368
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2179690
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2179938
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2184228
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2190134
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2193269
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2203728
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2221612
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2225414
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2227258
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2228178
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2232996
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2279320
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2284741
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2290672
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2297581
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2297583
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2298494
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2299410
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2300161
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2300796
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2300802
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2301069
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2301259
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2301264
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2302180
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2303078
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2303842
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2304471
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2304478
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2304712
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2305784
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2306833
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2315046
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2344423
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2348957
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2350648
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2352414
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2352645
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2352782
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2352900
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2353405
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2355237
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2356004
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2356499
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2358683
00:35:32.547 Removing: /var/run/dpdk/spdk_pid2359097
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2359817
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2364234
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2370079
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2370081
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2370083
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2374055
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2382430
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2386269
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2392509
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2393843
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2395421
00:35:32.805 Removing: /var/run/dpdk/spdk_pid2396973
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2401605
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2406063
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2413498
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2413619
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2418902
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2419022
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2419154
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2419605
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2419631
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2424221
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2424702
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2429261
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2431811
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2437463
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2446268
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2453278
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2453286
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2472765
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2473000
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2479099
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2479376
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2484774
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2485282
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2485789
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2486447
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2487186
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2487663
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2488134
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2488827
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2492881
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2498026
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2503703
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2507766
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2512372
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2522336
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2522819
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2527082
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2527325
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2531591
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2537360
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2539908
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2550004
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2567412
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2571205
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2572813
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2573727
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2578471
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2581230
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2589146
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2589154
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2594220
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2596182
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2598143
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2599192
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2601288
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2602960
00:35:32.806 Removing: /var/run/dpdk/spdk_pid2611716
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2612184
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2612643
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2614928
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2615490
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2616054
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2619870
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2619887
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2621396
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2621953
00:35:33.064 Removing: /var/run/dpdk/spdk_pid2621989
00:35:33.064 Clean
00:35:33.064 09:19:48 -- common/autotest_common.sh@1453 -- # return 0
00:35:33.064 09:19:48 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:35:33.064 09:19:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.064 09:19:48 -- common/autotest_common.sh@10 -- # set +x
00:35:33.064 09:19:48 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:35:33.064 09:19:48 -- common/autotest_common.sh@732 -- # xtrace_disable
00:35:33.064 09:19:48 -- common/autotest_common.sh@10 -- # set +x
00:35:33.064 09:19:49 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:35:33.064 09:19:49 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:35:33.064 09:19:49 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:35:33.064 09:19:49 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:35:33.064 09:19:49 -- spdk/autotest.sh@398 -- # hostname
00:35:33.064 09:19:49 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:35:33.322 geninfo: WARNING: invalid characters removed from testname!
00:35:55.252 09:20:10 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:57.158 09:20:12 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:35:59.062 09:20:14 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:00.965 09:20:16 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:02.869 09:20:18 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:04.772 09:20:20 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:06.678 09:20:22 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:06.678 09:20:22 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:06.678 09:20:22 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:06.678 09:20:22 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:06.678 09:20:22 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:06.678 09:20:22 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:06.678 + [[ -n 2061722 ]]
00:36:06.678 + sudo kill 2061722
00:36:06.687 [Pipeline] }
00:36:06.704 [Pipeline] // stage
00:36:06.710 [Pipeline] }
00:36:06.723 [Pipeline] // timeout
00:36:06.727 [Pipeline] }
00:36:06.741 [Pipeline] // catchError
00:36:06.746 [Pipeline] }
00:36:06.759 [Pipeline] // wrap
00:36:06.764 [Pipeline] }
00:36:06.776 [Pipeline] // catchError
00:36:06.783 [Pipeline] stage
00:36:06.784 [Pipeline] { (Epilogue)
00:36:06.794 [Pipeline] catchError
00:36:06.795 [Pipeline] {
00:36:06.806 [Pipeline] echo
00:36:06.808 Cleanup processes
00:36:06.814 [Pipeline] sh
00:36:07.098 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:07.098 2632642 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:07.112 [Pipeline] sh
00:36:07.402 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:07.403 ++ grep -v 'sudo pgrep'
00:36:07.403 ++ awk '{print $1}'
00:36:07.403 + sudo kill -9
00:36:07.403 + true
00:36:07.415 [Pipeline] sh
00:36:07.701 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:19.913 [Pipeline] sh
00:36:20.196 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:20.196 Artifacts sizes are good
00:36:20.209 [Pipeline] archiveArtifacts
00:36:20.215 Archiving artifacts
00:36:20.341 [Pipeline] sh
00:36:20.720 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:36:20.733 [Pipeline] cleanWs
00:36:20.742 [WS-CLEANUP] Deleting project workspace...
00:36:20.742 [WS-CLEANUP] Deferred wipeout is used...
00:36:20.748 [WS-CLEANUP] done
00:36:20.749 [Pipeline] }
00:36:20.765 [Pipeline] // catchError
00:36:20.776 [Pipeline] sh
00:36:21.056 + logger -p user.info -t JENKINS-CI
00:36:21.065 [Pipeline] }
00:36:21.079 [Pipeline] // stage
00:36:21.084 [Pipeline] }
00:36:21.098 [Pipeline] // node
00:36:21.103 [Pipeline] End of Pipeline
00:36:21.138 Finished: SUCCESS